On Tuesday, 24 October 2017 at 16:18:03 UTC, H. S. Teoh wrote:
On Tue, Oct 24, 2017 at 10:02:11AM +0000, Arun Chandrasekaran
via Digitalmars-d-learn wrote:
On Monday, 23 October 2017 at 18:08:43 UTC, Ali Çehreli wrote:
> On 10/23/2017 07:22 AM, Arun Chandrasekaran wrote:
> > [...]
> The rule is that every expression has a type and 22/7 is int.
Thanks Ali. Is this for backward compatibility with C?
Because, if there is a division, a natural/mathematical (not
programmatic) expectation is to see a double in the result.
[...]
I have never seen a programming language in which dividing two
integers yields a float or double. Either numbers default to a
floating point type, in which case you begin with floats in the
first place, or division is integer division, yielding an
integer result.
T
I'm not denying that all the programming languages do it this
way (even if it is a cause of related bugs).
I'm just questioning the reasoning behind why D does it this way,
and whether it is for compatibility or there is some other
reasoning behind the decision.
Arun