Re: accuracy of floating point calculations: d vs cpp

2019-07-22 Thread Timon Gehr via Digitalmars-d-learn

On 22.07.19 14:49, drug wrote:
I have almost identical implementations (D and C++, or so I believe) of 
the same algorithm, which uses Kalman filtering. These implementations 
show different results in the least significant digits, though. Before I 
start investigating, I would like to ask whether this issue (different 
results of floating-point calculations in D and C++) is well known. 
Maybe I can read something about it on the web? Is D's implementation of 
floating-point types different from that of C++?


Most of all, I'm interested in equal results, to make it easier to compare 
the outputs of the two implementations. The accuracy itself is sufficient 
in my case, but this difference is annoying in some cases.


This is probably not your problem, but it may be good to know anyway: D 
allows compilers to perform arbitrary "enhancement" of floating-point 
precision for parts of the computation, including those performed at 
compile time. I think this is stupid, but I haven't been able to 
convince Walter.
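
A rough illustration of what that permission can mean in practice 
(whether the two values below match depends on the compiler, target and 
settings, which is exactly the point):

import std.stdio;

void main()
{
    // Constant-folded by the compiler; D allows this to be evaluated at
    // higher intermediate precision than float.
    enum float folded = (0.1f + 0.2f) - 0.3f;

    // The "same" computation done by the generated code at run time.
    float a = 0.1f, b = 0.2f, c = 0.3f;
    float runtime = (a + b) - c;

    writefln("compile time: %a", folded);
    writefln("run time:     %a", runtime); // not guaranteed to be identical
}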


Re: segfault in ldc release only - looks like some kind of optimization bug?

2019-07-22 Thread Exil via Digitalmars-d-learn

On Tuesday, 23 July 2019 at 00:54:08 UTC, aliak wrote:

On Tuesday, 23 July 2019 at 00:36:49 UTC, Exil wrote:

auto ref get(T)(W!T value) {
    return value.front;
}

You're returning a reference to a temporary that gets deleted 
at the end of the function's scope. The "auto ref" here will 
be a "ref".


Oh... shit, you're right.

Ok so this was minimized from this:

const config = Config.ghApp(ghDomain)
    .orElseThrow!(() => new Exception(
        "could not find config for domain '%s'".format(ghDomain)
    ));

Where Config.ghApp returns an Optional!GhApp, and orElseThrow 
checks that the range is not empty and returns its front. The front 
in Optional is defined like the front above...


So is that an incorrect idiom to use when writing a library 
then? I'm pretty sure I've seen it in Phobos too.


Slapping return on the function also fixes it. Is that the 
correct way to write a .front?


Thanks!


Yes, you can use "return". It tells the compiler that the 
function or method returns something that references a passed-in 
parameter, so that parameter has to be kept alive.


https://dlang.org/spec/function.html#return-ref-parameters
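
A minimal sketch of such a parameter (the names are just for 
illustration):

ref int identity(return ref int x) @safe
{
    // `return` documents that the result may alias `x`, so the caller's
    // lifetime checks treat the returned reference as borrowing from `x`.
    return x;
}

void main() @safe
{
    int n = 42;
    identity(n) = 7; // writes through the returned reference into n
    assert(n == 7);
}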

Your orElseThrow() probably shouldn't be taking a copy, though. 
The catch is that you then can't use it in a UFCS chain the way 
you are using it now. Using "return" is not an ideal fix either, 
as the destructor is still going to be called on the temporary, 
so you might run into an issue somewhere. It just happens to 
generate working assembly, I guess, when there is no destructor.
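
Something along these lines, with a stand-in Optional (the names and 
the signature are only a sketch, not the library's actual API):

import std.stdio;

struct Optional(T)
{
    T payload;
    bool present;
    bool empty() const { return !present; }
    ref inout(T) front() inout return { return payload; }
}

// Take the optional by reference so no temporary copy owns the data
// that front() refers to; the trade-off is that rvalues can't be passed.
auto ref orElseThrowRef(alias makeThrowable, T)(return ref Optional!T opt)
{
    if (opt.empty)
        throw makeThrowable();
    return opt.front;
}

void main()
{
    auto opt = Optional!string("hello", true);
    writeln(orElseThrowRef!(() => new Exception("empty"))(opt));
}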







Re: segfault in ldc release only - looks like some kind of optimization bug?

2019-07-22 Thread aliak via Digitalmars-d-learn

On Tuesday, 23 July 2019 at 00:36:49 UTC, Exil wrote:

auto ref get(T)(W!T value) {
    return value.front;
}

You're returning a reference to a temporary that gets deleted 
at the end of the function's scope. The "auto ref" here will be 
a "ref".


Oh... shit, you're right.

Ok so this was minimized from this:

const config = Config.ghApp(ghDomain)
    .orElseThrow!(() => new Exception(
        "could not find config for domain '%s'".format(ghDomain)
    ));

Where Config.ghApp returns an Optional!GhApp, and orElseThrow 
checks that the range is not empty and returns its front. The front 
in Optional is defined like the front above...


So is that an incorrect idiom to use when writing a library then? 
I'm pretty sure I've seen it in Phobos too.


Slapping return on the function also fixes it. Is that the 
correct way to write a .front?


Thanks!


Re: Why does a switch break cause a segmentation fault

2019-07-22 Thread Eric via Digitalmars-d-learn

Shouldn't (stream == null) be (stream is null)? 

-Eric 


From: "adamgoldberg via Digitalmars-d-learn" 
 
To: digitalmars-d-learn@puremagic.com 
Sent: Monday, July 22, 2019 3:05:17 PM 
Subject: Why does a switch break cause a segmentation fault 

Hey, I just happened to be writing a program in D and stumbled 
upon a bug that causes it to terminate after receiving a SEGV 
signal. Nothing weird so far, but everything I tried points to 
the break statement inside of a switch. 
It seems to have a relatively random chance of occurring, and is 
also somewhat dependent on the compiler and build mode used. 
I'm short on time, so instead of rewriting my SO post I will just 
link it. 

Here! 
https://stackoverflow.com/questions/57153617/random-segmentation-fault-in-d-lang-on-switch-break
 

Hope someone can help! 


Re: segfault in ldc release only - looks like some kind of optimization bug?

2019-07-22 Thread Exil via Digitalmars-d-learn

auto ref get(T)(W!T value) {
    return value.front;
}

You're returning a reference to a temporary that gets deleted at 
the end of the function's scope. The "auto ref" here will be a 
"ref".


segfault in ldc release only - looks like some kind of optimization bug?

2019-07-22 Thread aliak via Digitalmars-d-learn

Hi,

so I had this weird bug that was driving me crazy and only 
segfaulted with ldc in a release build (I'm using ldc 1.16.0).


This is the code that segfaults. All parts seem to be necessary 
for it to happen, or at least I think so. I've gone in circles 
minimizing so I've probably missed something. But this seems to 
be it:


import std;

struct W(T) {
    T value;
    ref inout(T) front() inout { return value; }
}

auto ref get(T)(W!T value) {
    return value.front;
}

struct S {
    string a;
}

void main() {
    S[string] aa;
    aa = ["one" : S("a")];
    auto b = W!S(aa["one"]).get;
    writeln(b);
}

Running with ldc and -O3 crashes: "ldc2 -O3 -run source.d"

Some things I've noticed:
- if you remove the call to .get and use .value directly, the 
crash goes away
- if you remove the inout specifier on front(), the crash goes away
- if you remove the ref specifier on front(), the crash goes away
- if you don't call writeln(b), the crash goes away
- if you don't use the S returned by the aa, the crash goes away
- if you *add* a return qualifier on the "front" function, the 
crash goes away
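
For reference, that last variant of W (a drop-in replacement for the W 
in the program above) looks like this:

struct W(T) {
    T value;
    // `return` ties the lifetime of the returned reference to `this`,
    // instead of letting it outlive the temporary it came from.
    ref inout(T) front() inout return { return value; }
}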


(what is that return thing btw and when do you use it?)

Any ideas?



Re: Why does a switch break cause a segmentation fault

2019-07-22 Thread Exil via Digitalmars-d-learn

On Monday, 22 July 2019 at 22:05:17 UTC, adamgoldberg wrote:
Hey, I just happened to be writing a program in D and stumbled 
upon a bug that causes it to terminate after receiving a SEGV 
signal. Nothing weird so far, but everything I tried points to 
the break statement inside of a switch.
It seems to have a relatively random chance of occurring, and 
is also somewhat dependent on the compiler and build mode used.
I'm short on time, so instead of rewriting my SO post I will 
just link it.


Here! 
https://stackoverflow.com/questions/57153617/random-segmentation-fault-in-d-lang-on-switch-break


Hope someone can help!


It could be the statement in the actual switch(), which is accessing 
the pointer "codecpar".


switch (stream.codecpar.codec_type)
^

This could be null and you aren't checking for it. I find that D 
sometimes doesn't have the correct line numbers for debug info, 
even when not doing an optimized build. So it could really be 
anything in that function.


The root cause could be a lot of things though, bad codegen or 
otherwise. Adding a check would at least rule this one out.
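
A minimal, self-contained sketch of that check (Stream and CodecPar 
here are stand-ins for the FFmpeg types used in the linked question):

struct CodecPar { int codec_type; }
struct Stream   { CodecPar* codecpar; }

void handle(Stream* stream)
{
    // Bail out instead of dereferencing a null pointer in the switch.
    if (stream is null || stream.codecpar is null)
        return;

    switch (stream.codecpar.codec_type)
    {
        case 0:  /* e.g. video */ break;
        default: break;
    }
}

void main()
{
    handle(null); // safe: the guard catches it
}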







Re: Any easy way to check if an object have inherited an interface?

2019-07-22 Thread rikki cattermole via Digitalmars-d-learn

On 23/07/2019 9:34 AM, solidstate1991 wrote:
It seems that I have to write my own function that searches the given 
object's classinfo.interfaces, since I couldn't find anything related 
in Phobos.


if (Bar bar = cast(Bar) foo) {
    // foo implements the interface Bar
}
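
Spelled out as a runnable example (Animal, Dog and Rock are made-up 
names):

import std.stdio;

interface Animal { }
class Dog : Animal { }
class Rock { }

void main()
{
    Object a = new Dog;
    Object b = new Rock;

    // Casting to the interface yields null when the object doesn't implement it.
    writeln((cast(Animal) a) !is null); // true
    writeln((cast(Animal) b) !is null); // false
}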


Why does a switch break cause a segmentation fault

2019-07-22 Thread adamgoldberg via Digitalmars-d-learn
Hey, I just happened to be writing a program in D and stumbled 
upon a bug that causes it to terminate after receiving a SEGV 
signal. Nothing weird so far, but everything I tried points to 
the break statement inside of a switch.
It seems to have a relatively random chance of occurring, and is 
also somewhat dependent on the compiler and build mode used.
I'm short on time, so instead of rewriting my SO post I will just 
link it.


Here! 
https://stackoverflow.com/questions/57153617/random-segmentation-fault-in-d-lang-on-switch-break


Hope someone can help!


Any easy way to check if an object have inherited an interface?

2019-07-22 Thread solidstate1991 via Digitalmars-d-learn
It seems that I have to write my own function that searches the 
given object's classinfo.interfaces, since I couldn't find 
anything related in Phobos.


Re: Why is Throwable.TraceInfo.toString not @safe?

2019-07-22 Thread Johannes Loher via Digitalmars-d-learn
On 22.07.19 at 20:38, Jonathan M Davis wrote:
> On Monday, July 22, 2019 1:29:21 AM MDT Johannes Loher via Digitalmars-d-
> learn wrote:
>> On 22.07.19 at 05:16, Paul Backus wrote:
>>> On Sunday, 21 July 2019 at 18:03:33 UTC, Johannes Loher wrote:
 I'd like to log stacktraces of caught exceptions in an @safe manner.
 However, Throwable.TraceInfo.toString is not @safe (or @trusted), so
 this is not possible. Why is it not @safe? Can it be @trusted?

 Thanks for your help!
>>>
>>> Seems like it's because it uses the form of toString that accepts a
>>> delegate [1], and that delegate parameter is not marked as @safe.
>>>
>>> [1] https://dlang.org/phobos/object.html#.Throwable.toString.2
>>
>> I'm not talking about Throwable's toString method, but about
>> Throwable.TraceInfo's. Throwable.TraceInfo is an interface nested inside
>> Throwable:
>>
>> interface TraceInfo
>> {
>>     int opApply(scope int delegate(ref const(char[]))) const;
>>     int opApply(scope int delegate(ref size_t, ref const(char[]))) const;
>>     string toString() const;
>> }
>>
>> Throwable has a member info of type TraceInfo. It is never explicitly
>> set, so I assume it is automatically set by runtime magic. This is the
>> constructor of Throwable:
>>
>> @nogc @safe pure nothrow this(string msg, Throwable nextInChain = null)
>> {
>>     this.msg = msg;
>>     this.nextInChain = nextInChain;
>>     //this.info = _d_traceContext();
>> }
>>
>> In theory, people could define their own exception classes which provide
>> their own implementation of TraceInfo, but I have never heard of anybody
>> doing this. So the real question is whether the toString methods of the
>> DRuntime implementations (I assume there might be different
>> implementations for different platforms) are actually @safe (or
>> @trusted), and why we do not mark the interface @safe then.
> 
> All of that stuff predates @safe and most if not all attributes. @safe and
> the like have been added to some of that stuff, but in some cases, doing so
> would break code. I'd have to look at TraceInfo in more detail to know
> whether it can reasonably be marked @safe, but if it might end up calling
> anything that isn't guaranteed to be @safe, then it probably can't be (e.g.
> if it calls Object's toString). Also, TraceInfo is really more for
> druntime's use than for your typical programmer to use it directly in their
> program. So, I wouldn't be surprised in the least if not much effort was
> ever put into making it @safe-friendly even if it could be. In general,
> stuff like @safe has tended to be applied to stuff in druntime and Phobos on
> a case-by-case basis, so it's really not surprising when an attribute is
> missing when it arguably should be there (though it frequently can't be
> there due to stuff like template arguments).
> 
> You _do_ need to be very careful with attributes and interfaces/classes
> though, because once an attribute is or isn't there, that can lock every
> derived class into a particular set of attributes. So, it's not always
> straightforward whether an attribute should be present or not. There are
> plenty of cases where ideally it would be present, but it can't be (e.g. a
> number of functions on TimeZone aren't pure even though they could be for
> _most_ derived classes, because they can't be pure for LocalTime), and there
> are cases where an attribute should be present but was simply never added. I
> don't know where TraceInfo sits, since I haven't dug into it.
> 
> - Jonathan M Davis
> 
> 
> 

Thanks for your insights. I had already guessed that it is simply too old
for @safe and friends...

I understand that we have to be careful when adding attributes to
interfaces / classes, but in this particular case, I really cannot
imagine a use case where we would not want it to be @safe (and the
situation here is better than with your TimeZone example because we
have the @trusted escape hatch). Also, I cannot imagine anybody
implementing their own version of TraceInfo.

The thing is that it can actually be quite useful to access TraceInfo in
user code, exactly for the example I gave in the beginning: logging the
stack trace. For long-running programs, it is common practice in the
industry to log the stack traces of caught exceptions in error cases
which do not mandate a program crash, simply to make a postmortem
analysis possible at all. And in my opinion, user code should be @safe as
much as possible. If we cannot mark this @trusted, it means that a whole
lot of user code cannot be @safe, simply because of logging, which would
be really weird...
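
The kind of wrapper this forces on every project looks something like
the sketch below, which assumes the runtime's TraceInfo implementations
really are memory-safe (exactly the open question here):

string traceText(Throwable t) @trusted
{
    // Assumed safe: delegates to the druntime TraceInfo implementation.
    return t.info is null ? "" : t.info.toString();
}

void logCaught(Throwable t) @safe
{
    import std.stdio : writeln;
    writeln("caught: ", t.msg);
    writeln(traceText(t));
}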

I had a quick look at the actual implementations of TraceInfo in
DRuntime. There are two very simple implementations: SuppressTraceInfo
(in core/exception.d) and StackTrace (in core/sys/windows/stacktrace.d).
In both cases, toString can be @trusted. The first is simply a dummy
implementation (it just returns null) and the second one is actually
marked @trusted (and also has quite a simple implementation).

What gi

Re: Why is Throwable.TraceInfo.toString not @safe?

2019-07-22 Thread Jonathan M Davis via Digitalmars-d-learn
On Monday, July 22, 2019 1:29:21 AM MDT Johannes Loher via Digitalmars-d-
learn wrote:
> On 22.07.19 at 05:16, Paul Backus wrote:
> > On Sunday, 21 July 2019 at 18:03:33 UTC, Johannes Loher wrote:
> >> I'd like to log stacktraces of caught exceptions in an @safe manner.
> >> However, Throwable.TraceInfo.toString is not @safe (or @trusted), so
> >> this is not possible. Why is it not @safe? Can it be @trusted?
> >>
> >> Thanks for your help!
> >
> > Seems like it's because it uses the form of toString that accepts a
> > delegate [1], and that delegate parameter is not marked as @safe.
> >
> > [1] https://dlang.org/phobos/object.html#.Throwable.toString.2
>
> I'm not talking about Throwable's toString method, but about
> Throwable.TraceInfo's. Throwable.TraceInfo is an interface nested inside
> Throwable:
>
> interface TraceInfo
> {
>     int opApply(scope int delegate(ref const(char[]))) const;
>     int opApply(scope int delegate(ref size_t, ref const(char[]))) const;
>     string toString() const;
> }
>
> Throwable has a member info of type TraceInfo. It is never explicitly
> set, so I assume it is automatically set by runtime magic. This is the
> constructor of Throwable:
>
> @nogc @safe pure nothrow this(string msg, Throwable nextInChain = null)
> {
>     this.msg = msg;
>     this.nextInChain = nextInChain;
>     //this.info = _d_traceContext();
> }
>
> In theory, people could define their own exception classes which provide
> their own implementation of TraceInfo, but I have never heard of anybody
> doing this. So the real question is whether the toString methods of the
> DRuntime implementations (I assume there might be different
> implementations for different platforms) are actually @safe (or
> @trusted), and why we do not mark the interface @safe then.

All of that stuff predates @safe and most if not all attributes. @safe and
the like have been added to some of that stuff, but in some cases, doing so
would break code. I'd have to look at TraceInfo in more detail to know
whether it can reasonably be marked @safe, but if it might end up calling
anything that isn't guaranteed to be @safe, then it probably can't be (e.g.
if it calls Object's toString). Also, TraceInfo is really more for
druntime's use than for your typical programmer to use it directly in their
program. So, I wouldn't be surprised in the least if not much effort was
ever put into making it @safe-friendly even if it could be. In general,
stuff like @safe has tended to be applied to stuff in druntime and Phobos on
a case-by-case basis, so it's really not surprising when an attribute is
missing when it arguably should be there (though it frequently can't be
there due to stuff like template arguments).

You _do_ need to be very careful with attributes and interfaces/classes
though, because once an attribute is or isn't there, that can lock every
derived class into a particular set of attributes. So, it's not always
straightforward whether an attribute should be present or not. There are
plenty of cases where ideally it would be present, but it can't be (e.g. a
number of functions on TimeZone aren't pure even though they could be for
_most_ derived classes, because they can't be pure for LocalTime), and there
are cases where an attribute should be present but was simply never added. I
don't know where TraceInfo sits, since I haven't dug into it.

- Jonathan M Davis





Re: accuracy of floating point calculations: d vs cpp

2019-07-22 Thread drug via Digitalmars-d-learn

22.07.2019 17:19, drug wrote:

22.07.2019 16:26, Guillaume Piolat wrote:


Typical floating-point operations in single precision, like a simple 
(a * b) + c, can produce a difference on the order of -140 dB if the 
order is changed. It's likely the order of operations is not the same in 
your program, so the least significant digit should be expected to differ.


What I would recommend is to compute the mean relative error, in double, 
and if it's below -200 dB, not to bother. That is an incredibly low 
relative error of 0.0001%.
You will have no difficulty making your D program deterministic, but 
finding exactly where the C++ and D versions differ will take long and 
serve no purpose.
Unfortunately, the error has turned out to be much bigger than I guessed 
before. So obviously there is a problem on either the D side or the C++ 
side. The error is too large to ignore.


There was a typo in the C++ implementation. I did a quick-and-dirty Python 
version, and after the typo was fixed all three implementations show the 
same result if a single filter update occurs. But if several updates 
happen, a subtle difference still exists; the error accumulates somewhere 
else. Time to use numerical methods.


Re: accuracy of floating point calculations: d vs cpp

2019-07-22 Thread drug via Digitalmars-d-learn

22.07.2019 16:26, Guillaume Piolat wrote:


Typical floating-point operations in single precision, like a simple 
(a * b) + c, can produce a difference on the order of -140 dB if the 
order is changed. It's likely the order of operations is not the same 
in your program, so the least significant digit should be expected to 
differ.


What I would recommend is to compute the mean relative error, in double, 
and if it's below -200 dB, not to bother. That is an incredibly low 
relative error of 0.0001%.
You will have no difficulty making your D program deterministic, but 
finding exactly where the C++ and D versions differ will take long and 
serve no purpose.
Unfortunately, the error has turned out to be much bigger than I guessed 
before. So obviously there is a problem on either the D side or the C++ 
side. The error is too large to ignore.


Re: accuracy of floating point calculations: d vs cpp

2019-07-22 Thread Dennis via Digitalmars-d-learn

On Monday, 22 July 2019 at 12:49:24 UTC, drug wrote:
Before I start investigating, I would like to ask whether this issue 
(different results of floating-point calculations in D and 
C++) is well known?


This likely has little to do with the language, and more with the 
implementation. Basic floating point operations at the same 
precision should give the same results. There can be differences 
in float printing (see [1]) and math functions (sqrt, cos, pow 
etc.) however.


Tips for getting consistent results between C/C++ and D:
- Use the same backend, so compare DMD with DMC, LDC with CLANG 
and GDC with GCC.
- Use the same C runtime library. On Unix, glibc will likely be 
the default; on Windows, you likely use snn.lib, libcmt.lib or 
msvcrt.dll.

- On the D side, use core.stdc.math instead of std.math
- Use the same optimizations. (Don't use -ffast-math for C)
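
For the third tip, the change is just the import (a minimal sketch; 
%.17g prints enough digits to compare results exactly):

import core.stdc.math : cos, sqrt; // the C runtime's libm versions
// import std.math;                // D's own implementations may differ in the last bits
import std.stdio;

void main()
{
    double x = 2.0;
    writefln("%.17g", sqrt(x) + cos(x));
}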

[1] 
https://forum.dlang.org/post/fndyoiawueefqoeob...@forum.dlang.org


Re: accuracy of floating point calculations: d vs cpp

2019-07-22 Thread Guillaume Piolat via Digitalmars-d-learn

On Monday, 22 July 2019 at 13:23:26 UTC, Guillaume Piolat wrote:

On Monday, 22 July 2019 at 12:49:24 UTC, drug wrote:
I have almost identical implementations (D and C++, or so I 
believe) of the same algorithm, which uses Kalman filtering. 
These implementations show different results in the least 
significant digits, though. Before I start investigating, I 
would like to ask whether this issue (different results of 
floating-point calculations in D and C++) is well known. 
Maybe I can read something about it on the web? Is D's 
implementation of floating-point types different from that 
of C++?


Most of all, I'm interested in equal results, to make it 
easier to compare the outputs of the two implementations. 
The accuracy itself is sufficient in my case, but this 
difference is annoying in some cases.


Typical floating-point operations in single precision, like a 
simple (a * b) + c, can produce a difference on the order of 
-140 dB if the order is changed. It's likely the order of 
operations is not the same in your program, so the least 
significant digit should be expected to differ.


What I would recommend is to compute the mean relative error, 
in double, and if it's below -200 dB, not to bother. That is 
an incredibly low relative error of 0.0001%.
You will have no difficulty making your D program 
deterministic, but finding exactly where the C++ and D 
versions differ will take long and serve no purpose.
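
A quick way to get that number (a sketch; the dB figure is 20*log10 of 
the mean relative error, and min_normal just guards against dividing by 
zero):

import std.math : abs, log10;
import std.stdio;

double meanRelativeErrorDb(const double[] a, const double[] b)
{
    assert(a.length == b.length && a.length > 0);
    double sum = 0;
    foreach (i; 0 .. a.length)
        sum += abs(a[i] - b[i]) / (abs(b[i]) + double.min_normal);
    return 20 * log10(sum / a.length + double.min_normal);
}

void main()
{
    auto d   = [1.0000001, 2.0, 3.0]; // example values from one build
    auto cpp = [1.0,       2.0, 3.0]; // example values from the other
    writefln("mean relative error: %.1f dB", meanRelativeErrorDb(d, cpp));
}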


Re: accuracy of floating point calculations: d vs cpp

2019-07-22 Thread Guillaume Piolat via Digitalmars-d-learn

On Monday, 22 July 2019 at 12:49:24 UTC, drug wrote:
I have almost identical implementations (D and C++, or so I 
believe) of the same algorithm, which uses Kalman filtering. 
These implementations show different results in the least 
significant digits, though. Before I start investigating, I 
would like to ask whether this issue (different results of 
floating-point calculations in D and C++) is well known. 
Maybe I can read something about it on the web? Is D's 
implementation of floating-point types different from that 
of C++?


Most of all, I'm interested in equal results, to make it 
easier to compare the outputs of the two implementations. 
The accuracy itself is sufficient in my case, but this 
difference is annoying in some cases.


Typical floating-point operations in single precision, like a 
simple (a * b) + c, can produce a difference on the order of 
-140 dB if the order is changed. It's likely the order of 
operations is not the same in your program, so the least 
significant digit should be expected to differ.





Re: accuracy of floating point calculations: d vs cpp

2019-07-22 Thread rikki cattermole via Digitalmars-d-learn

On 23/07/2019 12:58 AM, drug wrote:

22.07.2019 15:53, rikki cattermole wrote:


https://godbolt.org/z/EtZLG0


Hmm, in short, this is a problem local to my setup?


That is not how I would describe it.

I would describe it as IEEE-754 doing what IEEE-754 is good at.
But my point is, you can get the results to match up, if you care about it.


Re: Is there a way to bypass the file and line into D assert function ?

2019-07-22 Thread Newbie2019 via Digitalmars-d-learn

On Monday, 22 July 2019 at 09:54:13 UTC, Jacob Carlborg wrote:

On 2019-07-19 22:16, Max Haughton wrote:

Isn't assert a template (file and line) rather than a plain 
function call?


No. It's a keyword; it's built into the compiler. It gets extra 
benefits compared to a regular function: the asserts will be 
removed in release builds.


Thanks for explaining.


Re: accuracy of floating point calculations: d vs cpp

2019-07-22 Thread drug via Digitalmars-d-learn

22.07.2019 15:53, rikki cattermole wrote:


https://godbolt.org/z/EtZLG0


Hmm, in short, this is a problem local to my setup?


Re: accuracy of floating point calculations: d vs cpp

2019-07-22 Thread rikki cattermole via Digitalmars-d-learn

On 23/07/2019 12:49 AM, drug wrote:
I have almost identical implementations (D and C++, or so I believe) of 
the same algorithm, which uses Kalman filtering. These implementations 
show different results in the least significant digits, though. Before I 
start investigating, I would like to ask whether this issue (different 
results of floating-point calculations in D and C++) is well known. 
Maybe I can read something about it on the web? Is D's implementation of 
floating-point types different from that of C++?


Most of all, I'm interested in equal results, to make it easier to compare 
the outputs of the two implementations. The accuracy itself is sufficient 
in my case, but this difference is annoying in some cases.


https://godbolt.org/z/EtZLG0


accuracy of floating point calculations: d vs cpp

2019-07-22 Thread drug via Digitalmars-d-learn
I have almost identical implementations (D and C++, or so I believe) of 
the same algorithm, which uses Kalman filtering. These implementations 
show different results in the least significant digits, though. Before I 
start investigating, I would like to ask whether this issue (different 
results of floating-point calculations in D and C++) is well known. 
Maybe I can read something about it on the web? Is D's implementation of 
floating-point types different from that of C++?


Most of all, I'm interested in equal results, to make it easier to compare 
the outputs of the two implementations. The accuracy itself is sufficient 
in my case, but this difference is annoying in some cases.


Re: Is there a way to bypass the file and line into D assert function ?

2019-07-22 Thread Jacob Carlborg via Digitalmars-d-learn

On 2019-07-19 22:16, Max Haughton wrote:


Isn't assert a template (file and line) rather than a plain function call?


No. It's a keyword; it's built into the compiler. It gets extra benefits 
compared to a regular function: the asserts will be removed in release 
builds.
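
For reference, a regular function can still capture the caller's file 
and line through default arguments, which gives assert-like diagnostics 
(a minimal sketch with made-up names):

void check(bool cond, string msg = "check failed",
           string file = __FILE__, size_t line = __LINE__)
{
    // Exception's constructor takes (msg, file, line), so the report
    // points at the caller, not at this wrapper.
    if (!cond)
        throw new Exception(msg, file, line);
}

void main()
{
    check(1 + 1 == 2);       // passes
    // check(false, "boom"); // would report the caller's file and line
}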


--
/Jacob Carlborg


Re: Why is Throwable.TraceInfo.toString not @safe?

2019-07-22 Thread Johannes Loher via Digitalmars-d-learn
On 22.07.19 at 05:16, Paul Backus wrote:
> On Sunday, 21 July 2019 at 18:03:33 UTC, Johannes Loher wrote:
>> I'd like to log stacktraces of caught exceptions in an @safe manner.
>> However, Throwable.TraceInfo.toString is not @safe (or @trusted), so
>> this is not possible. Why is it not @safe? Can it be @trusted?
>>
>> Thanks for your help!
> 
> Seems like it's because it uses the form of toString that accepts a
> delegate [1], and that delegate parameter is not marked as @safe.
> 
> [1] https://dlang.org/phobos/object.html#.Throwable.toString.2

I'm not talking about Throwable's toString method, but about
Throwable.TraceInfo's. Throwable.TraceInfo is an interface nested inside Throwable:

interface TraceInfo
{
    int opApply(scope int delegate(ref const(char[]))) const;
    int opApply(scope int delegate(ref size_t, ref const(char[]))) const;
    string toString() const;
}

Throwable has a member info of type TraceInfo. It is never explicitly
set, so I assume it is automatically set by runtime magic. This is the
constructor of Throwable:

@nogc @safe pure nothrow this(string msg, Throwable nextInChain = null)
{
    this.msg = msg;
    this.nextInChain = nextInChain;
    //this.info = _d_traceContext();
}

In theory, people could define their own exception classes which provide
their own implementation of TraceInfo, but I have never heard of anybody
doing this. So the real question is whether the toString methods of the
DRuntime implementations (I assume there might be different
implementations for different platforms) are actually @safe (or
@trusted), and why we do not mark the interface @safe then.