On 14 May 2016 at 04:16, Walter Bright via Digitalmars-d
<digitalmars-d@puremagic.com> wrote:
> On 5/12/2016 10:12 PM, Manu via Digitalmars-d wrote:
>>
>> No. Do not.
>> I've worked on systems where the compiler and the runtime don't share
>> floating point precisions before, and it was a nightmare.
>> One anecdote: the PS2 had a vector coprocessor; it ran reduced (24-bit,
>> iirc?) float precision, but code compiled for it used 32 bits in the
>> compiler... and to make it worse, the CPU also ran 32 bits. The result
>> was that literals/constants, or float data fed from the CPU, didn't
>> match data calculated by the vector unit at runtime (i.e. runtime
>> computation of the same calculation that may have occurred at compile
>> time to produce some constant didn't match). The result was severe
>> cracking and visible/shimmering seams between triangles as sub-pixel
>> alignment broke down.
>> We struggled with this for years. It was practically impossible to
>> solve, and mostly involved workarounds.
>
> I understand there are some cases where this is needed; I've proposed
> intrinsics for that.
Intrinsics for... what? Making the compiler use the type specified at
compile time? Are you saying that's not happening already? I really
don't want to use an intrinsic to make a float behave like a float at
CTFE... nobody will EVER do that.

>> I really just want D to use double throughout, like all the CPUs that
>> run code today. This 80-bit real thing (only on x86 CPUs though!) is a
>> never-ending pain.
>
> It's 128 bits on other CPUs.

What?

>> This sounds like designing specifically for my problem from above,
>> where the frontend is always different than the backend/runtime.
>> Please have the frontend behave such that it operates on the precise
>> datatype expressed by the type... the backend probably does this too,
>> and the runtime certainly does; they all match.
>
> Except this never happens anyway.

Huh? I'm sorry, I didn't follow those points.
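
To make the CTFE point concrete, here's a minimal sketch of the kind of
mismatch I mean (an assumption for illustration: a compiler that
const-folds float expressions at 80-bit real precision while the
generated code rounds at 32 bits; whether the two values actually differ
depends on the compiler and target):

import std.stdio;

float sum(float a, float b)
{
    return a + b; // a float add, per the declared types
}

void main()
{
    enum  ct = sum(0.1f, 0.2f); // forced compile-time evaluation (CTFE)
    float rt = sum(0.1f, 0.2f); // ordinary run-time evaluation

    // If CTFE carries intermediates at real (80-bit) precision while
    // the run-time code rounds every step to 32-bit float, these two
    // can print different bit patterns.
    writefln("CTFE:    %a", ct);
    writefln("runtime: %a", rt);
}

If those two lines can ever print different values, then a float doesn't
behave like a float at compile time, and that's exactly the class of bug
we fought on the PS2.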