So what you really want is: signed array lengths. You only have
a use for the sloppy conversion, because D doesn't have signed
array lengths.
Yes and no. Signed safety is better, but the current behavior
works too.
Honestly I don't care if the bit maniacs want 8 million terabytes
more for t
On 06/30/2017 07:38 AM, Ecstatic Coder wrote:
I'm just against putting it on by default, so that the current behavior
is kept, because I don't see where the language improvement is in having
to put these ugly manual conversions everywhere just because the
string/array length was made unsigned.
On Thursday, 29 June 2017 at 19:12:24 UTC, ag0aep6g wrote:
On Thursday, 29 June 2017 at 18:03:39 UTC, Ecstatic Coder wrote:
I often do code like "x < array.length" where x needs to be a
long to be able to handle negative values.
I want my code to compile without warning, and therefore I'm
aga
On Thursday, 29 June 2017 at 18:03:39 UTC, Ecstatic Coder wrote:
I often do code like "x < array.length" where x needs to be a
long to be able to handle negative values.
I want my code to compile without warning, and therefore I'm
against requiring "x < array.length.to!long()" to remove that
On Monday, 9 May 2016 at 11:16:53 UTC, ZombineDev wrote:
On Monday, 9 May 2016 at 09:10:19 UTC, Walter Bright wrote:
Don Clugston pointed out in his DConf 2016 talk that:
float f = 1.30;
assert(f == 1.30);
will always be false since 1.30 is not representable as a
float. However,
On 14.05.2016 02:49, Timon Gehr wrote:
On 13.05.2016 23:35, Walter Bright wrote:
On 5/13/2016 12:48 PM, Timon Gehr wrote:
IMO the compiler should never be allowed to use a precision different
from the one specified.
I take it you've never been bitten by accumulated errors :-)
...
If that wa
On 22.08.2016 20:26, Joakim wrote:
Sorry, I stopped reading this thread after my last response, as I felt I
was wasting too much time on this discussion, so I didn't read your
response till now.
...
No problem. Would have been fine with me if it stayed that way.
On Saturday, 21 May 2016 at 14
Sorry, I stopped reading this thread after my last response, as I
felt I was wasting too much time on this discussion, so I didn't
read your response till now.
On Saturday, 21 May 2016 at 14:38:20 UTC, Timon Gehr wrote:
On 20.05.2016 13:32, Joakim wrote:
Yet you're the one arguing against incr
On Saturday, 21 May 2016 at 22:05:31 UTC, Timon Gehr wrote:
On 21.05.2016 20:14, Walter Bright wrote:
It's good to list traps for the unwary in FP usage. It's
disingenuous to
list only problems with one design and pretend there are no
traps in
another design.
Some designs are much better tha
On Saturday, 21 May 2016 at 21:56:02 UTC, Walter Bright wrote:
On 5/21/2016 11:36 AM, Tobias M wrote:
Sorry but this is a misrepresentation. I never claimed that
the x87 doesn't
conform to the IEEE standard.
My point was directed to more than just you. Sorry I didn't
make that clear.
The
On 21.05.2016 20:14, Walter Bright wrote:
On 5/21/2016 10:03 AM, Timon Gehr wrote:
Check out section 5 for some convincing examples showing why the x87
is horrible.
The polio vaccine winds up giving a handful of people polio, too.
...
People don't get vaccinated without consent.
It's good
On 5/21/2016 11:36 AM, Tobias M wrote:
Sorry but this is a misrepresentation. I never claimed that the x87 doesn't
conform to the IEEE standard.
My point was directed to more than just you. Sorry I didn't make that clear.
The point is that it IS possible to provide fairly reasonable and con
Reasons have been alleged. What's your final decision?
On 21.05.2016 19:58, Walter Bright wrote:
On 5/21/2016 2:26 AM, Tobias Müller wrote:
On Friday, 20 May 2016 at 22:22:57 UTC, Walter Bright wrote:
On 5/20/2016 5:36 AM, Tobias M wrote:
Still an authority, though.
If we're going to use the fallacy of appeal to authority, may I
present Kahan
wh
On Saturday, 21 May 2016 at 17:58:49 UTC, Walter Bright wrote:
On 5/21/2016 2:26 AM, Tobias Müller wrote:
On Friday, 20 May 2016 at 22:22:57 UTC, Walter Bright wrote:
On 5/20/2016 5:36 AM, Tobias M wrote:
Still an authority, though.
If we're going to use the fallacy of appeal to authority, m
On 5/21/2016 10:03 AM, Timon Gehr wrote:
Check out section 5 for some convincing examples showing why the x87 is
horrible.
The polio vaccine winds up giving a handful of people polio, too.
It's good to list traps for the unwary in FP usage. It's disingenuous to list
only problems with one de
On 5/21/2016 2:26 AM, Tobias Müller wrote:
On Friday, 20 May 2016 at 22:22:57 UTC, Walter Bright wrote:
On 5/20/2016 5:36 AM, Tobias M wrote:
Still an authority, though.
If we're going to use the fallacy of appeal to authority, may I present Kahan
who concurrently designed the IEEE 754 spec a
On 21.05.2016 15:45, Timon Gehr wrote:
On 21.05.2016 00:22, Walter Bright wrote:
...
may I present
Kahan who concurrently designed the IEEE 754 spec and the x87.
The x87 is by far not the slam-dunk design you seem to make it out to
be. ...
https://hal.archives-ouvertes.fr/hal-00128124v5/doc
On 17.05.2016 20:09, Max Samukha wrote:
On Monday, 16 May 2016 at 19:01:19 UTC, Timon Gehr wrote:
You are not even guaranteed to get the same result on two different x86
implementations.
Without reading the x86 specification, I think it is safe to claim
that you actually are guaranteed to get
On 20.05.2016 14:34, Johan Engelen wrote:
On Thursday, 19 May 2016 at 18:22:48 UTC, Timon Gehr wrote:
dmd -run kahanDemo.d
1000.00
1001.00
1000.00
dmd -m32 -O -run kahanDemo.d
1000.00
1000.00
1000.0
On 20.05.2016 08:25, Walter Bright wrote:
On 5/19/2016 12:49 AM, Max Samukha wrote:
People are trying to get across that, if they wanted to maximize
accuracy, they
would request the most precise type explicitly. D has 'real' for that.
This
thread has shown unequivocally that the semantics you ar
On 20.05.2016 13:32, Joakim wrote:
On Friday, 20 May 2016 at 11:02:45 UTC, Timon Gehr wrote:
On 20.05.2016 11:14, Joakim wrote:
On Thursday, 19 May 2016 at 18:22:48 UTC, Timon Gehr wrote:
On 19.05.2016 08:04, Joakim wrote:
On Wednesday, 18 May 2016 at 17:10:25 UTC, Timon Gehr wrote:
It's not
On 21.05.2016 00:22, Walter Bright wrote:
On 5/20/2016 5:36 AM, Tobias M wrote:
Still an authority, though.
If we're going to use the fallacy of appeal to authority,
Authorities are not always wrong, the fallacy is to argue that they are
right *because* they are authorities. However, in thi
On Tuesday, 17 May 2016 at 21:07:21 UTC, Walter Bright wrote:
[...] why an unusual case of producing a slightly worse answer
trumps the usual case of producing better answers.
'Sometimes worse' is not 'better', but that's not the point, here.
Even if you managed to consistently produce not les
On Friday, 20 May 2016 at 22:22:57 UTC, Walter Bright wrote:
On 5/20/2016 5:36 AM, Tobias M wrote:
Still an authority, though.
If we're going to use the fallacy of appeal to authority, may I
present Kahan who concurrently designed the IEEE 754 spec and
the x87.
Since I'm just in the mood o
On Friday, 20 May 2016 at 22:22:57 UTC, Walter Bright wrote:
On 5/20/2016 5:36 AM, Tobias M wrote:
Still an authority, though.
If we're going to use the fallacy of appeal to authority, may I
present Kahan who concurrently designed the IEEE 754 spec and
the x87.
Actually cited this *because
On Friday, 20 May 2016 at 06:12:44 UTC, Walter Bright wrote:
If you say so. I would like to see an example that
demonstrates that the first
roundToDouble is required.
That's beside the point. If there are spots in the program that
require rounding, what is wrong with having to specify it?
On 5/20/2016 5:36 AM, Tobias M wrote:
Still an authority, though.
If we're going to use the fallacy of appeal to authority, may I present Kahan
who concurrently designed the IEEE 754 spec and the x87.
On Friday, 20 May 2016 at 11:22:48 UTC, Timon Gehr wrote:
On 20.05.2016 08:12, Walter Bright wrote:
I'm curious if you know of any language that meets your
requirements.
(Java 1.0 did, but Sun was forced to abandon that.)
x86_64 assembly language.
Similar discussion for Rust:
https://inter
On Friday, 20 May 2016 at 12:32:40 UTC, Tobias Müller wrote:
Let me cite Prof. John L Gustafson
Not "Prof." but "Dr.", sorry about that. Still an authority,
though.
On Thursday, 19 May 2016 at 18:22:48 UTC, Timon Gehr wrote:
dmd -run kahanDemo.d
1000.00
1001.00
1000.00
dmd -m32 -O -run kahanDemo.d
1000.00
1000.00
1000.00
Better?
Ignore if you think it's not r
On Friday, 20 May 2016 at 06:12:44 UTC, Walter Bright wrote:
On 5/19/2016 1:26 PM, Timon Gehr wrote:
Those two lines producing different results is unexpected,
because you are
explicitly saying that y is a double, and the first line also
does implicit
rounding (probably -- on all compilers and
On Friday, 20 May 2016 at 11:02:45 UTC, Timon Gehr wrote:
On 20.05.2016 11:14, Joakim wrote:
On Thursday, 19 May 2016 at 18:22:48 UTC, Timon Gehr wrote:
On 19.05.2016 08:04, Joakim wrote:
On Wednesday, 18 May 2016 at 17:10:25 UTC, Timon Gehr wrote:
It's not just slightly worse, it can cut the
On 20.05.2016 08:12, Walter Bright wrote:
On 5/19/2016 1:26 PM, Timon Gehr wrote:
Those two lines producing different results is unexpected, because you
are
explicitly saying that y is a double, and the first line also does
implicit
rounding (probably -- on all compilers and targets that will be
On 20.05.2016 11:14, Joakim wrote:
On Thursday, 19 May 2016 at 18:22:48 UTC, Timon Gehr wrote:
On 19.05.2016 08:04, Joakim wrote:
On Wednesday, 18 May 2016 at 17:10:25 UTC, Timon Gehr wrote:
It's not just slightly worse, it can cut the number of useful bits in
half or more! It is not unusual,
On Thursday, 19 May 2016 at 18:22:48 UTC, Timon Gehr wrote:
On 19.05.2016 08:04, Joakim wrote:
On Wednesday, 18 May 2016 at 17:10:25 UTC, Timon Gehr wrote:
It's not just slightly worse, it can cut the number of useful
bits in
half or more! It is not unusual, I have actually run into
those
prob
On 5/19/2016 12:49 AM, Max Samukha wrote:
People are trying to get across that, if they wanted to maximize accuracy, they
would request the most precise type explicitly. D has 'real' for that. This
thread has shown unequivocally that the semantics you are insisting on is bound
to cause endless co
On 5/19/2016 1:26 PM, Timon Gehr wrote:
Those two lines producing different results is unexpected, because you are
explicitly saying that y is a double, and the first line also does implicit
rounding (probably -- on all compilers and targets that will be relevant in the
near future -- to double).
On 5/18/2016 4:30 AM, Ethan Watson wrote:
I appreciate that it sounds like I'm starting to stretch to hold to my point,
but I imagine we'd also be able to control such things with the compiler - or at
least know what flags it uses so that we can ensure consistent behaviour between
compilation and
On 19.05.2016 09:09, Walter Bright wrote:
On 5/18/2016 10:10 AM, Timon Gehr wrote:
double kahan(double[] arr){
double sum = 0.0;
double c = 0.0;
foreach(x;arr){
double y=x-c;
double y = roundToDouble(x - c);
Those two lines producing different results is unexpecte
On 18.05.2016 19:10, Timon Gehr wrote:
implementation-defined behaviour
Maybe that wasn't the right term (it's worse than that; I think the
documentation of the implementation is not even required to tell you
precisely what it does).
On 19.05.2016 08:04, Joakim wrote:
On Wednesday, 18 May 2016 at 17:10:25 UTC, Timon Gehr wrote:
It's not just slightly worse, it can cut the number of useful bits in
half or more! It is not unusual, I have actually run into those
problems in the past, and it can break an algorithm that is in Pho
On Thursday, 19 May 2016 at 11:33:38 UTC, Joakim wrote:
Computer scientists are no good if they don't know any science.
Even the computer scientists who do not know any science are
infinitely better than those who refuse to read papers and debate
on a rational level.
Blind D zealotry at
On Thursday, 19 May 2016 at 12:00:33 UTC, Joseph Rushton Wakeling
wrote:
On Thursday, 19 May 2016 at 11:33:38 UTC, Joakim wrote:
The example he refers to is laughable because it also checks
for equality.
With good reason, because it's intended to illustrate the point
that two calculations tha
On Thursday, 19 May 2016 at 11:33:38 UTC, Joakim wrote:
The example he refers to is laughable because it also checks
for equality.
With good reason, because it's intended to illustrate the point
that two calculations that _look_ identical in code, that
intuitively should produce identical res
On Thursday, 19 May 2016 at 11:00:31 UTC, Ola Fosheim Grøstad
wrote:
On Thursday, 19 May 2016 at 08:37:55 UTC, Joakim wrote:
On Thursday, 19 May 2016 at 08:28:22 UTC, Ola Fosheim Grøstad
wrote:
On Thursday, 19 May 2016 at 06:04:15 UTC, Joakim wrote:
In this case, not increasing precision gets t
On Thursday, 19 May 2016 at 08:37:55 UTC, Joakim wrote:
On Thursday, 19 May 2016 at 08:28:22 UTC, Ola Fosheim Grøstad
wrote:
On Thursday, 19 May 2016 at 06:04:15 UTC, Joakim wrote:
In this case, not increasing precision gets the more accurate
result, but other examples could be constructed that
On Thursday, 19 May 2016 at 08:28:22 UTC, Ola Fosheim Grøstad
wrote:
On Thursday, 19 May 2016 at 06:04:15 UTC, Joakim wrote:
In this case, not increasing precision gets the more accurate
result, but other examples could be constructed that _heavily_
favor increasing precision. In fact, almost
On Thursday, 19 May 2016 at 06:04:15 UTC, Joakim wrote:
In this case, not increasing precision gets the more accurate
result, but other examples could be constructed that _heavily_
favor increasing precision. In fact, almost any real-world,
non-toy calculation would favor it.
Please stop say
On Wednesday, 18 May 2016 at 22:16:44 UTC, jmh530 wrote:
On Wednesday, 18 May 2016 at 21:49:34 UTC, Joseph Rushton
Wakeling wrote:
On Wednesday, 18 May 2016 at 20:29:27 UTC, Walter Bright wrote:
I do not understand the tolerance for bad results in
scientific, engineering, medical, or finance ap
On Thursday, 19 May 2016 at 07:09:30 UTC, Walter Bright wrote:
On 5/18/2016 10:10 AM, Timon Gehr wrote:
double kahan(double[] arr){
double sum = 0.0;
double c = 0.0;
foreach(x;arr){
double y=x-c;
double y = roundToDouble(x - c);
double t=sum+y;
On 5/18/2016 10:10 AM, Timon Gehr wrote:
double kahan(double[] arr){
double sum = 0.0;
double c = 0.0;
foreach(x;arr){
double y=x-c;
double y = roundToDouble(x - c);
double t=sum+y;
double t = roundToDouble(sum + y);
c = (t-sum)-y;
On Wednesday, 18 May 2016 at 17:10:25 UTC, Timon Gehr wrote:
It's not just slightly worse, it can cut the number of useful
bits in half or more! It is not unusual, I have actually run
into those problems in the past, and it can break an algorithm
that is in Phobos today!
I wouldn't call that
On Wed, May 18, 2016 at 04:28:13PM -0700, Walter Bright via Digitalmars-d wrote:
> On 5/18/2016 4:17 PM, Joseph Rushton Wakeling wrote:
> > On Wednesday, 18 May 2016 at 23:09:28 UTC, Walter Bright wrote:
> > > Now try the square root of 2. Or pi, e, etc. The irrational
> > > numbers are, by definit
On Wed, May 18, 2016 at 04:09:28PM -0700, Walter Bright via Digitalmars-d wrote:
[...]
> Now try the square root of 2. Or pi, e, etc. The irrational numbers
> are, by definition, not representable as a ratio.
This is somewhat tangential, but in certain applications it is perfectly
possible to repr
On 5/18/2016 4:17 PM, Joseph Rushton Wakeling wrote:
On Wednesday, 18 May 2016 at 23:09:28 UTC, Walter Bright wrote:
Now try the square root of 2. Or pi, e, etc. The irrational numbers are, by
definition, not representable as a ratio.
Continued fraction? :-)
Somehow I don't think gcc is usin
On Wednesday, 18 May 2016 at 23:09:28 UTC, Walter Bright wrote:
Now try the square root of 2. Or pi, e, etc. The irrational
numbers are, by definition, not representable as a ratio.
Continued fraction? :-)
On 5/18/2016 1:22 PM, deadalnix wrote:
On Wednesday, 18 May 2016 at 20:14:22 UTC, Walter Bright wrote:
On 5/18/2016 4:48 AM, deadalnix wrote:
Typo: arbitrary precision FP. Meaning some soft float that grows as big as
necessary to not lose precision à la BigInt but for floats.
0.10 is not repr
On Wednesday, 18 May 2016 at 22:06:43 UTC, Era Scarecrow wrote:
On Wednesday, 18 May 2016 at 21:02:03 UTC, tsbockman wrote:
Can you give me a source for this, or at least the name of the
relevant op code? (I'm new to x86 assembly.)
http://www.mathemainzel.info/files/x86asmref.html#mul
http://
On Wednesday, 18 May 2016 at 21:49:34 UTC, Joseph Rushton
Wakeling wrote:
On Wednesday, 18 May 2016 at 20:29:27 UTC, Walter Bright wrote:
I do not understand the tolerance for bad results in
scientific, engineering, medical, or finance applications.
I don't think anyone has suggested tolerance
On Wednesday, 18 May 2016 at 21:02:03 UTC, tsbockman wrote:
On Wednesday, 18 May 2016 at 19:53:10 UTC, Era Scarecrow wrote:
On Wednesday, 18 May 2016 at 19:36:59 UTC, tsbockman wrote:
I agree that intrinsics for this would be nice. I doubt that
any current D platform is actually computing the f
On Wednesday, 18 May 2016 at 20:29:27 UTC, Walter Bright wrote:
I do not understand the tolerance for bad results in
scientific, engineering, medical, or finance applications.
I don't think anyone has suggested tolerance for bad results in
any of those applications.
What _has_ been argued fo
On Wednesday, 18 May 2016 at 19:53:10 UTC, Era Scarecrow wrote:
On Wednesday, 18 May 2016 at 19:36:59 UTC, tsbockman wrote:
I agree that intrinsics for this would be nice. I doubt that
any current D platform is actually computing the full 128 bit
result for every 64 bit multiply though - that w
On 5/18/2016 4:27 AM, Manu via Digitalmars-d wrote:
The comparison was a 24bit fpu doing runtime work but where some
constant input data was calculated with a separate 32bit fpu. The
particulars were not ever intended to be relevant to the conversation,
except the fact that 2 differently precisio
On Wednesday, 18 May 2016 at 19:30:12 UTC, deadalnix wrote:
I'm confused as to why the compiler would be using soft floats
instead of hard floats.
Cross compilation.
Ah, looking back on the discussion, I see the comments about
cross compilation and soft floats. Making more sense now...
S
On Wednesday, 18 May 2016 at 20:14:22 UTC, Walter Bright wrote:
On 5/18/2016 4:48 AM, deadalnix wrote:
Typo: arbitrary precision FP. Meaning some soft float that
grows as big as
necessary to not lose precision à la BigInt but for floats.
0.10 is not representable in a binary format regardless
On 5/18/2016 4:48 AM, deadalnix wrote:
Typo: arbitrary precision FP. Meaning some soft float that grows as big as
necessary to not lose precision à la BigInt but for floats.
0.10 is not representable in a binary format regardless of precision.
On Wednesday, 18 May 2016 at 19:36:59 UTC, tsbockman wrote:
I agree that intrinsics for this would be nice. I doubt that
any current D platform is actually computing the full 128 bit
result for every 64 bit multiply though - that would waste both
power and performance, for most programs.
Exc
On Wednesday, 18 May 2016 at 11:46:37 UTC, Era Scarecrow wrote:
On Wednesday, 18 May 2016 at 10:25:10 UTC, tsbockman wrote:
https://code.dlang.org/packages/checkedint
https://dlang.org/phobos/core_checkedint.html
Glancing at the checkedInt I really don't see it as being the
same as what I'm
On Wednesday, 18 May 2016 at 19:20:20 UTC, jmh530 wrote:
On Wednesday, 18 May 2016 at 12:39:21 UTC, Johannes Pfau wrote:
Do you have a link explaining GCC actually uses such a soft
float?
I'm confused as to why the compiler would be using soft floats
instead of hard floats.
Cross compilat
On Wednesday, 18 May 2016 at 12:39:21 UTC, Johannes Pfau wrote:
Do you have a link explaining GCC actually uses such a soft
float?
I'm confused as to why the compiler would be using soft floats
instead of hard floats.
On 17.05.2016 21:31, deadalnix wrote:
On Tuesday, 17 May 2016 at 18:08:47 UTC, Timon Gehr wrote:
Right. Hence, the 80-bit CTFE results have to be converted to the
final precision at some point in order to commence the runtime
computation. This means that additional rounding happens, which was
no
I had written and sent this message three days ago, but it seemingly
never showed up on the newsgroup. I'm sorry if it seemed that I didn't
explain myself, I was operating under the assumption that this message
had been made available to you.
On 14.05.2016 03:26, Walter Bright wrote:
> On 5/1
On 17.05.2016 23:07, Walter Bright wrote:
On 5/17/2016 11:08 AM, Timon Gehr wrote:
Right. Hence, the 80-bit CTFE results have to be converted to the final
precision at some point in order to commence the runtime computation.
This means
that additional rounding happens, which was not present in t
On Wednesday, 18 May 2016 at 15:42:56 UTC, Joakim wrote:
I see, so the fact that both the C++ and D specs say the same
thing doesn't matter, and the fact that D also has the const
float in your example as single-precision at runtime, contrary
to your claims, none of that matters.
D doesn't ev
On Wednesday, 18 May 2016 at 15:30:42 UTC, Matthias Bentrup wrote:
On Wednesday, 18 May 2016 at 14:29:42 UTC, Ola Fosheim Grøstad
wrote:
On Wednesday, 18 May 2016 at 12:27:38 UTC, Ola Fosheim Grøstad
wrote:
And yes, half-precision is only 10 bits.
Actually, it turns out that the mantissa is 1
On Wednesday, 18 May 2016 at 12:27:38 UTC, Ola Fosheim Grøstad
wrote:
On Wednesday, 18 May 2016 at 11:16:44 UTC, Joakim wrote:
Welcome to the wonderful world of C++! :D
More seriously, it is well-defined for that implementation,
you did not raise the issue of the spec till now. In fact,
you
On Wednesday, 18 May 2016 at 14:29:42 UTC, Ola Fosheim Grøstad
wrote:
On Wednesday, 18 May 2016 at 12:27:38 UTC, Ola Fosheim Grøstad
wrote:
And yes, half-precision is only 10 bits.
Actually, it turns out that the mantissa is 11 bits. So it
clearly plays louder than other floats. ;-)
The man
On Wednesday, 18 May 2016 at 12:27:38 UTC, Ola Fosheim Grøstad
wrote:
And yes, half-precision is only 10 bits.
Actually, it turns out that the mantissa is 11 bits. So it
clearly plays louder than other floats. ;-)
On Wednesday, 18 May 2016 at 12:39:21 UTC, Johannes Pfau wrote:
Do you have a link explaining GCC actually uses such a soft
float? For example
https://github.com/gcc-mirror/gcc/blob/master/gcc/fold-const.c#L20 still says "This file should be rewritten to use an arbitrary precision..."
Alright,
On Wed, 18 May 2016 11:48:49 +,
deadalnix wrote:
> On Wednesday, 18 May 2016 at 11:11:08 UTC, Walter Bright wrote:
> > On 5/18/2016 3:15 AM, deadalnix wrote:
> >> On Wednesday, 18 May 2016 at 08:21:18 UTC, Walter Bright wrote:
> >>> Trying to make D behave exactly like various C++ compil
On Wednesday, 18 May 2016 at 11:16:44 UTC, Joakim wrote:
Welcome to the wonderful world of C++! :D
More seriously, it is well-defined for that implementation, you
did not raise the issue of the spec till now. In fact, you
seemed not to care what the specs say.
Eh? All C/C++ compilers I have
On 18 May 2016 at 21:53, ixid via Digitalmars-d
wrote:
> On Wednesday, 18 May 2016 at 11:38:23 UTC, Manu wrote:
>>
>> That's precisely the suggestion; that compile time execution of a
>> given type mirror the runtime, that is, matching precisions in this
>> case.
>> ...within reason; as Walter has
On Wednesday, 18 May 2016 at 11:38:23 UTC, Manu wrote:
That's precisely the suggestion; that compile time execution of
a
given type mirror the runtime, that is, matching precisions in
this
case.
...within reason; as Walter has pointed out consistently, it's
very
difficult to be PERFECT for all
On Wednesday, 18 May 2016 at 11:11:08 UTC, Walter Bright wrote:
On 5/18/2016 3:15 AM, deadalnix wrote:
On Wednesday, 18 May 2016 at 08:21:18 UTC, Walter Bright wrote:
Trying to make D behave exactly like various C++ compilers
do, with all their
semi-documented behavior and semi-documented switc
On Wednesday, 18 May 2016 at 10:25:10 UTC, tsbockman wrote:
On Wednesday, 18 May 2016 at 08:38:07 UTC, Era Scarecrow wrote:
try {}// Considers the result of 1 line of basic math to
be caught by:
carry {} //only activates if carry is set
overflow {} //if overflowed during some math
On Wed, 18 May 2016 04:11:08 -0700,
Walter Bright wrote:
> On 5/18/2016 3:15 AM, deadalnix wrote:
> > On Wednesday, 18 May 2016 at 08:21:18 UTC, Walter Bright wrote:
> >> Trying to make D behave exactly like various C++ compilers do,
> >> with all their semi-documented behavior and semi-docume
On Wednesday, 18 May 2016 at 11:12:16 UTC, Joseph Rushton
Wakeling wrote:
I'm not sure that the `const float` vs `float` is the
difference per se. The difference is that in the examples
you've given, the `const float` is being determined (and used)
at compile time.
They both have to be deter
On 18 May 2016 at 21:28, ixid via Digitalmars-d
wrote:
> On Wednesday, 18 May 2016 at 08:55:03 UTC, Walter Bright wrote:
>>
>> On 5/18/2016 1:30 AM, Ethan Watson wrote:
You're also asking for a mode where the compiler for one machine is
supposed
to behave like hand-coded assemb
On Wednesday, 18 May 2016 at 11:17:14 UTC, Walter Bright wrote:
Again, even if the precision matches, the rounding will NOT
match, and you will get different results randomly dependent on
the exact operand values.
We've already been burned by middlewares/APIS toggling MMX flags
on and off and
On Wednesday, 18 May 2016 at 08:55:03 UTC, Walter Bright wrote:
On 5/18/2016 1:30 AM, Ethan Watson wrote:
You're also asking for a mode where the compiler for one
machine is supposed
to behave like hand-coded assembler for another machine with
a different
instruction set.
Actually, I'm askin
On 18 May 2016 at 18:21, Walter Bright via Digitalmars-d
wrote:
> On 5/18/2016 12:56 AM, Ethan Watson wrote:
>>
>> > In any case, the problem Manu was having was with C++.
>> VU code was all assembly, I don't believe there was a C/C++ compiler for
>> it.
>
>
> The constant folding part was where,
On 5/18/2016 3:46 AM, Ola Fosheim Grøstad wrote:
On Wednesday, 18 May 2016 at 09:13:35 UTC, Iain Buclaw wrote:
Can you back that up statistically? Try running this same operation 600
million times and plot a graph of the result from each run so we can get
an idea of just how random or arbit
On 5/18/2016 2:54 AM, Ethan Watson wrote:
On Wednesday, 18 May 2016 at 08:55:03 UTC, Walter Bright wrote:
MSVC doesn't appear to have a switch that does what you ask for
I'm still not entirely sure what the /fp switch does for x64 builds. The
documentation is not clear in the slightest and I h
On Wednesday, 18 May 2016 at 09:21:30 UTC, Ola Fosheim Grøstad
wrote:
On Wednesday, 18 May 2016 at 07:21:30 UTC, Joakim wrote:
On Wednesday, 18 May 2016 at 05:49:16 UTC, Ola Fosheim Grøstad
wrote:
On Wednesday, 18 May 2016 at 03:01:14 UTC, Joakim wrote:
There is nothing "random" about increasin
On Wednesday, 18 May 2016 at 09:21:30 UTC, Ola Fosheim Grøstad
wrote:
No. The "const float y" will not be coerced to 32 bit, but the
"float y" will be coerced to 32 bit. So you get two different y
values. (On a specific compiler, i.e. DMD.)
I'm not sure that the `const float` vs `float` is the
On 5/18/2016 3:15 AM, deadalnix wrote:
On Wednesday, 18 May 2016 at 08:21:18 UTC, Walter Bright wrote:
Trying to make D behave exactly like various C++ compilers do, with all their
semi-documented behavior and semi-documented switches that affect constant
folding behavior, is a hopeless task.
I
On Wednesday, 18 May 2016 at 09:13:35 UTC, Iain Buclaw wrote:
Can you back that up statistically? Try running this same
operation 600 million times and plot a graph of the result from
each run so we can get an idea of just how random or
arbitrary it really is.
Huh? This isn't about stati
On Wednesday, 18 May 2016 at 08:38:07 UTC, Era Scarecrow wrote:
try {}// Considers the result of 1 line of basic math to
be caught by:
carry {} //only activates if carry is set
overflow {} //if overflowed during some math
modulus(m){} //get the remainder as m after a division
opera
1 - 100 of 366 matches