On Thursday, 11 November 2021 at 14:52:45 UTC, Stanislav Blinov wrote:
On Thursday, 11 November 2021 at 09:11:37 UTC, Salih Dincer wrote:

Unless explicitly set, the default type is int. 10_000_000_000 is greater than int.max.
```d
  enum w = 100_000;
  size_t b = w * w;
  // size_t b = 100000 * 100000; // ???
  assert(b == 10_000_000_000); // Assert Failure
```
But `w` (an `int`) is not greater than `b` (a `size_t`)...

That code is
```d
size_t b = int(w) * int(w);
```

That is, `multiply two ints and assign result to a size_t`. Multiplication of two ints is still an int though, and you can't fit ten billion in an int, so that's overflow. It doesn't matter that you declare `b` as `size_t` here. Overflow happens before that assignment.
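
For example, widening one operand before the multiplication keeps the whole computation in 64 bits (a minimal sketch, not from the quoted post; it assumes a 64-bit target where `size_t` is `ulong`):

```d
void main()
{
    enum w = 100_000;

    // Promote one operand to 64 bits first, so the product no longer wraps:
    size_t b = ulong(w) * w;
    assert(b == 10_000_000_000);

    // Alternatively, give the enum a 64-bit type to begin with:
    enum ulong w2 = 100_000;
    size_t c = w2 * w2;
    assert(c == 10_000_000_000);
}
```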

Thank you all :)

It seems DMD's type inference still has some way to go...

I think the safest and most practical method is to use double explicitly:

```d
import std.stdio;

enum factors { n =  1e+9, n1 }

// Gauss sum 1 + 2 + ... + n for n = 1e9: n * (n + 1) / 2
auto gauss (double a = factors.n,
            double b = factors.n1)
{ return cast(size_t)(a * b)/2; }

void main()
{
  gauss.writeln;

  ulong.max.writeln;
}
```
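
On a 64-bit build this prints 500000000500000000 (that is, 1_000_000_000 * 1_000_000_001 / 2) followed by 18446744073709551615 (ulong.max). Keep in mind that double represents integers exactly only up to 2^53, so even larger products can silently lose precision with this approach.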
