Re: floats default to NaN... why?

2012-06-09 Thread Minas

With
ints, the best we can do is 0. With floats, NaN makes it better.


With the logic that NaN is the default for floats, 0 is a very 
bad choice for ints. It's the worst we could do. Although I 
understand that setting it to something else like -infinity is 
still not a good choice.


I think that if D wants people to initialize their variables, it 
should generate a compiler error when not doing so, like C# and 
Java. For me, having floats defaulting to NaN and ints to zero is 
somewhere in the middle... Which isn't good.


The current solution is not good for me (I still love D though).




Re: floats default to NaN... why?

2012-06-09 Thread Kevin
On 09/06/12 14:42, Minas wrote:
 With
 ints, the best we can do is 0. With floats, NaN makes it better.

 With the logic that NaN is the default for floats, 0 is a very bad
 choice for ints. It's the worst we could do. Although I understand that
 setting it to something else like -infinity is still not a good choice.
Is it just me, or do ints not have infinity values?  I think ints should
default to about half of their capacity (probably negative for signed). 
This way you are unlikely to get an off-by-one error from uninitialized values.

 I think that if D wants people to initialize their variables, it
 should generate a compiler error when not doing so, like C# and Java.
 For me, having floats defaulting to NaN and ints to zero is somewhere
 in the middle... Which isn't good.
I 100% agree.






Re: floats default to NaN... why?

2012-06-09 Thread Jerome BENOIT



On 09/06/12 20:48, Kevin wrote:

On 09/06/12 14:42, Minas wrote:

With
ints, the best we can do is 0. With floats, NaN makes it better.


With the logic that NaN is the default for floats, 0 is a very bad
choice for ints. It's the worst we could do. Although I understand that
setting it to something else like -infinity is still not a good choice.

Is it just me, or do ints not have infinity values?


In mathematics, yes, but not in D.

 I think ints should

default to about half of their capacity (probably negative for signed).


This would be machine dependent; as such, it should be avoided.


This way you are unlikely to get an off-by-one error from uninitialized values.


Something like a Not-an-Integer (NaI) value would be better.




I think that if D wants people to initialize their variables, it
should generate a compiler error when not doing so, like C# and Java.
For me, having floats defaulting to NaN and ints to zero is somewhere
in the middle... Which isn't good.

I 100% agree.



Re: floats default to NaN... why?

2012-06-09 Thread Kevin
On Sat 09 Jun 2012 14:59:21 EDT, Jerome BENOIT wrote:


 On 09/06/12 20:48, Kevin wrote:
 On 09/06/12 14:42, Minas wrote:
 With
 ints, the best we can do is 0. With floats, NaN makes it better.

 With the logic that NaN is the default for floats, 0 is a very bad
 choice for ints. It's the worst we could do. Although I understand that
 setting it to something else like -infinity is still not a good choice.
 Is it just me, or do ints not have infinity values?

 In mathematics, yes, but not in D.

  I think ints should
 default to about half of their capacity (probably negative for signed).

 This would be machine dependent; as such, it should be avoided.

 This way you are unlikely to get an off-by-one error from uninitialized
 values.

 Something like a Not-an-Integer (NaI) value would be better.

I just don't think it is a good idea to add more metadata to ints.



Re: floats default to NaN... why?

2012-06-09 Thread Jerome BENOIT



On 09/06/12 20:42, Minas wrote:

With
ints, the best we can do is 0. With floats, NaN makes it better.


With the logic that NaN is the default for floats, 0 is a very bad choice for 
ints. It's the worst we could do. Although I understand that setting it to 
something else like -infinity is still not a good choice.



Do you have in mind something like NaI (Not an Integer)?


I think that if D wants people to initialize their variables, it should 
generate a compiler error when not doing so, like C# and Java. For me, having 
floats defaulting to NaN and ints to zero is somewhere in the middle... Which 
isn't good.

The current solution is not good for me (I still love D though).




Jerome


Re: floats default to NaN... why?

2012-06-09 Thread Andrew Wiley
On Sat, Jun 9, 2012 at 4:53 PM, Andrew Wiley wiley.andre...@gmail.com wrote:

 On Sat, Jun 9, 2012 at 11:57 AM, Kevin kevincox...@gmail.com wrote:

 On Sat 09 Jun 2012 14:59:21 EDT, Jerome BENOIT wrote:
 
 
  On 09/06/12 20:48, Kevin wrote:
  On 09/06/12 14:42, Minas wrote:
  With
  ints, the best we can do is 0. With floats, NaN makes it better.
 
  With the logic that NaN is the default for floats, 0 is a very bad
  choice for ints. It's the worst we could do. Although I understand that
  setting it to something else like -infinity is still not a good
  choice.
  Is it just me, or do ints not have infinity values?
 
  In mathematics, yes, but not in D.
 
   I think ints should
  default to about half of their capacity (probably negative for
  signed).
 
  This would be machine dependent; as such, it should be avoided.
 
  This way you are unlikely to get an off-by-one error from
  uninitialized values.
 
  Something like a Not-an-Integer (NaI) value would be better.

 I just don't think it is a good idea to add more metadata to ints.


 I agree. With floats, NaN is implemented in hardware. The compiler doesn't
 have to check for NaN when emitting assembly, it just emits operations
 normally and the hardware handles NaN like you'd expect.
 If we tried to add a NaN-like value for integers, we would have to check
 for it before performing integer math. Even with value range propagation, I
 think that would injure integer math performance significantly.

Crap, my apologies for responding with HTML.
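To make the cost argument above concrete, here is a hypothetical sketch. The `NaI` sentinel and `checkedAdd` helper are invented for illustration (they are not a real D feature): reserving one bit pattern as "Not an Integer" would force a branch into every integer operation, whereas NaN propagation is handled by the FPU for free.

```d
// Hypothetical sketch: suppose D reserved int.min as a "Not an Integer"
// sentinel. NaI and checkedAdd are invented names, not real D features.
enum int NaI = int.min;

int checkedAdd(int a, int b)
{
    // This branch would be needed before *every* integer operation,
    // unlike NaN, which the hardware propagates at no extra cost.
    if (a == NaI || b == NaI)
        return NaI;
    return a + b;
}

void main()
{
    assert(checkedAdd(2, 3) == 5);     // normal math still works
    assert(checkedAdd(NaI, 3) == NaI); // NaI propagates, at the cost of a branch
}
```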


Re: floats default to NaN... why?

2012-06-09 Thread Jerome BENOIT

Hello:

On 10/06/12 01:57, Andrew Wiley wrote:

On Sat, Jun 9, 2012 at 4:53 PM, Andrew Wiley wiley.andre...@gmail.com wrote:


On Sat, Jun 9, 2012 at 11:57 AM, Kevin kevincox...@gmail.com wrote:


On Sat 09 Jun 2012 14:59:21 EDT, Jerome BENOIT wrote:



On 09/06/12 20:48, Kevin wrote:

On 09/06/12 14:42, Minas wrote:

With
ints, the best we can do is 0. With floats, NaN makes it better.


With the logic that NaN is the default for floats, 0 is a very bad
choice for ints. It's the worst we could do. Although I understand that
setting it to something else like -infinity is still not a good
choice.

Is it just me, or do ints not have infinity values?


In mathematics, yes, but not in D.

  I think ints should

default to about half of their capacity (probably negative for
signed).


This would be machine dependent; as such, it should be avoided.


This way you are unlikely to get an off-by-one error from uninitialized
values.


Something like a Not-an-Integer (NaI) value would be better.


I just don't think it is a good idea to add more metadata to ints.



I agree. With floats, NaN is implemented in hardware. The compiler doesn't
have to check for NaN when emitting assembly, it just emits operations
normally and the hardware handles NaN like you'd expect.
If we tried to add a NaN-like value for integers, we would have to check
for it before performing integer math. Even with value range propagation, I
think that would injure integer math performance significantly.


I see. So the alternative, to get a kind of NaN effect, would be to set integers
to their hardware extrema (INT_MAX, SIZE_MAX, ...). But this option is hardware
dependent, so zero as the default for integers sounds like the best option.

Jerome



Crap, my apologies for responding with HTML.


Re: floats default to NaN... why?

2012-06-09 Thread Jonathan M Davis
On Sunday, June 10, 2012 02:32:18 Jerome BENOIT wrote:
 I see. So the alternative, to get a kind of NaN effect, would be to set
 integers to their hardware extremum (INT_MAX,SIZE_MAX,...). But this option
 is hardware dependent, so zero as default for integers sounds the best
 option.

??? All integral types in D have fixed sizes. It's _not_ system or hardware 
dependent. There's no reason why we couldn't have defaulted int to int.max, 
long to long.max, etc. It would have been the same on all systems.

size_t does vary across systems, because it's an alias (uint on 32-bit, ulong 
on 64-bit), but that's pretty much the only integral type which varies.

There's no hardware stuff going on there like you get with NaN, so you wouldn't 
get behavior like dividing by zero resulting in int.max. Everything would be the 
same as always if int defaulted to int.max - it's just the init value which 
would change. NaN can do more, because it has hardware support and floating 
point values are just plain different from integral values. But we could 
certainly have made integral values default to their max without costing 
performance.
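The fixed sizes and init values described above can be checked directly; this is a minimal sketch of D's actual behavior, verifiable with any D compiler:

```d
void main()
{
    int i;       // init value is 0 on every platform; int is always 32 bits
    long l;      // always 64 bits, init 0
    float f;     // init value is float.nan

    assert(i == 0 && l == 0);
    assert(f != f);                 // NaN is the only value unequal to itself
    assert(int.max == 2147483647);  // fixed size, so int.max is the same everywhere
}
```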

Regardless, for better or worse, 0 was chosen as the init value for all of the 
integral types, and it would break tons of code to change it now. So, it's 
never going to change.

- Jonathan M Davis


Re: floats default to NaN... why?

2012-06-09 Thread Jerome BENOIT



On 10/06/12 02:49, Jonathan M Davis wrote:

On Sunday, June 10, 2012 02:32:18 Jerome BENOIT wrote:

I see. So the alternative, to get a kind of NaN effect, would be to set
integers to their hardware extremum (INT_MAX,SIZE_MAX,...). But this option
is hardware dependent, so zero as default for integers sounds the best
option.


??? All integral types in D have fixed sizes.


You are right, what I wrote is nonsense:
I am getting tired.

Sorry for the mistake, and the noise,
Jerome


Re: floats default to NaN... why?

2012-06-09 Thread Jerome BENOIT



On 10/06/12 02:49, Jonathan M Davis wrote:

On Sunday, June 10, 2012 02:32:18 Jerome BENOIT wrote:

I see. So the alternative, to get a kind of NaN effect, would be to set
integers to their hardware extremum (INT_MAX,SIZE_MAX,...). But this option
is hardware dependent, so zero as default for integers sounds the best
option.


??? All integral types in D have fixed sizes.


Sorry, I forgot this part.


Jerome


Re: floats default to NaN... why?

2012-06-08 Thread Minas
The idea isn't being practical exactly.  The idea was to use 
invalid values as defaults. Unfortunately things like ints don't 
have invalid values, so they chose zero.  The idea is to make 
people initialize their variables.


I understand the logic, but I still disagree. No offense :)
I don't like inconsistency though (that ints are 
zero-initialized).




Re: floats default to NaN... why?

2012-06-08 Thread Andrew Wiley
On Thu, Jun 7, 2012 at 6:50 PM, Minas minas_mina1...@hotmail.co.uk wrote:

 I agree that the default value for floats/doubles should be zero. It feels
 much more natural.


The point is to make sure code is correct. Initializing your variables
should be what feels natural. Leaving them uninitialized is bad style, bad
practice, and generally a bad idea. If getting stung by a float
initializing to NaN is what it takes to make that happen, so be it.


 I think the problem here is that people are thinking about some stuff too
 much. D is a rather new language that wants to be practical. Floats
 defaulting to NaN is NOT practical FOR MOST PEOPLE when at the same time I
 write:


Actually, as far as I know, floats have been initialized to NaN since D
first appeared in 2001. This isn't news.




 int sum;

 for(...)
  sum += blah blah blah

 And it works.

In any other systems language, this would be undefined behavior. D is
simply trying to make initialization bugs as obvious as possible. With
ints, the best we can do is 0. With floats, NaN makes it better.
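The contrast described above can be seen directly in D: the int accumulator quietly "works", while the float one makes the missing initialization obvious.

```d
import std.math : isNaN;

void main()
{
    int isum;    // defaults to 0, so the loop appears to just work
    float fsum;  // defaults to NaN, so the bug surfaces immediately

    foreach (v; [1, 2, 3])
    {
        isum += v;
        fsum += v;   // NaN + anything is still NaN
    }

    assert(isum == 6);
    assert(isNaN(fsum)); // the uninitialized accumulator is impossible to miss
}
```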


Re: floats default to NaN... why?

2012-06-07 Thread Minas
I agree that the default value for floats/doubles should be zero. 
It feels much more natural.


I think the problem here is that people are thinking about some 
stuff too much. D is a rather new language that wants to be 
practical. Floats defaulting to NaN is NOT practical FOR MOST 
PEOPLE when at the same time I write:


int sum;

for(...)
  sum += blah blah blah

And it works.

Having floats defaulting to a value that's unnatural for most 
people is, in my opinion, craziness. Even if that's more 
correct in a mathematical sense.


Please excuse my English.


Re: floats default to NaN... why?

2012-06-07 Thread Kevin Cox
On Jun 7, 2012 9:53 PM, Minas minas_mina1...@hotmail.co.uk wrote:

 I agree that the default value for floats/doubles should be zero. It
feels much more natural.

 I think the problem here is that people are thinking about some stuff too
much. D is a rather new language that wants to be practical. Floats
defaulting to NaN is NOT practical FOR MOST PEOPLE when at the same time I
write:

 int sum;

 for(...)
  sum += blah blah blah

 And it works.

 Having floats defaulting to a value that's unnatural for most people is,
in my opinion, craziness. Even if that's more correct in a mathematical
sense.

 Please excuse my English.

The idea isn't being practical exactly.  The idea was to use invalid
values as defaults. Unfortunately things like ints don't have invalid
values, so they chose zero.  The idea is to make people initialize their
variables.  It would have been better to pick 19472937 as the int default
because not many people will use that value.

The code you showed would be considered **bad** because you did not
initialize your variables; however, ints defaulting to zero is well defined,
so it isn't really a big deal.

With floats there is this wonderful value that ensures that nothing
reasonable comes out of a bad calculation, NaN.  Therefore this is used as
a default because if you forget to initialize your vars it will jump out at
you.

This is the thought that was used, and many people don't agree. It 
is **very** unlikely to be changed now, but __maybe__ in D3, 
however far off that is. I hope not though; I hope ints default 
to 8472927 instead, but everyone has different opinions.


Re: floats default to NaN... why?

2012-06-07 Thread Jerome BENOIT

hello List:

On 08/06/12 04:04, Kevin Cox wrote:


On Jun 7, 2012 9:53 PM, Minas minas_mina1...@hotmail.co.uk wrote:
 
  I agree that the default value for floats/doubles should be zero. It feels 
much more natural.


This highly depends on your perspective:
for numerical folks, NaN is _the_ natural default.


 
  I think the problem here is that people are thinking about some stuff too 
much. D is a rather new language that wants to be practical. Floats defaulting to 
NaN is NOT practical FOR MOST PEOPLE when at the same time I write:
 
  int sum;
 
  for(...)
   sum += blah blah blah
 
  And it works.
 
  Having floats defaulting to a value that's unnatural for most people is, in my 
opinion, craziness. Even if that's more correct in a mathematical sense.
 


The ``most people'' argument is a very weak one in science and technical 
matters.


  Please excuse my English.

The idea isn't being practical exactly.  The idea was to use invalid values 
as defaults. Unfortunately things like ints don't have invalid values, so they chose 
zero.  The idea is to make people initialize their variables.  It would have been better 
to pick 19472937 as the int default because not many people will use that value.

The code you showed would be considered **bad** because you did not initialize 
your variables; however, ints defaulting to zero is well defined, so it isn't 
really a big deal.

With floats there is this wonderful value that ensures that nothing reasonable 
comes out of a bad calculation, NaN.  Therefore this is used as a default 
because if you forget to initialize your vars it will jump out at you.

This is the thought that was used, and many people don't agree.  It is **very** 
unlikely to be changed now, but __maybe__ in D3, however far off that is.  I hope 
not though; I hope ints default to 8472927 instead, but everyone has different 
opinions.


I totally agree.
I would go a step further and set the default for integers to +infinity.



Re: floats default to NaN... why?

2012-06-06 Thread ixid
People may not have voiced their dislike, but I'm sure quite a few 
don't like it; it felt jarringly wrong to me. Zero is a better 
default for its consistency and usefulness. Expecting a default 
to cause things to keel over isn't a good justification, or at 
least not as strong a one as the consistency of zero.





Re: floats default to NaN... why?

2012-06-05 Thread Don Clugston

On 14/04/12 16:52, F i L wrote:

On Saturday, 14 April 2012 at 10:38:45 UTC, Silveri wrote:

On Saturday, 14 April 2012 at 07:52:51 UTC, F i L wrote:

On Saturday, 14 April 2012 at 06:43:11 UTC, Manfred Nowak wrote:

F i L wrote:

4) use hardware signalling to overcome some of the limitations
impressed by 3).


4) I have no idea what you just said... :)


On Saturday, 14 April 2012 at 07:58:44 UTC, F i L wrote:

That's interesting, but what effect does appending an invalid char to
a valid one have? Does the resulting string end up being NaS (Not a
String)? Cause if not, I'm not sure that's a fair comparison.


The initialization values chosen are also determined by the underlying
hardware implementation of the type. Signalling NANs
(http://en.wikipedia.org/wiki/NaN#Signaling_NaN) can be used with
floats because they are implemented by the CPU, but in the case of
integers or strings there aren't really equivalent values.


I'm sure the hardware can just as easily signal zeros.


It can't.


Re: floats default to NaN... why?

2012-04-16 Thread Jerome BENOIT



On 16/04/12 04:38, F i L wrote:

Of course FP numbers are meant for coders... they're in a programming language. 
They are used by coders, and not every coder that uses FP math *has* to be well 
trained in the finer points of mathematics simply to use a number that can 
represent fractions in a conceptually practical way.


The above is not finer points, but basic ones.
Otherwise, float and double are rather integers than by fractions.


I don't understand what you wrote. Typo?


Typo:
float and double are rather represented by integers than by fractions.

Jerome


Re: floats default to NaN... why?

2012-04-15 Thread F i L

Forums are messing up, so I'll try and respond in sections.

/test


Re: floats default to NaN... why?

2012-04-15 Thread F i L
Actually, all of this discussion has made me think that having a 
compiler flag to change FP values to zero as default would be a 
good idea.


Basically my opinion is largely influenced by a couple things. 
That is:


- I believe a lot of good programmers are used to using zero for 
default. Winning them over is a good thing for everyone here. I'm 
not trying to blow this issue out of proportion, I understand 
this isn't all that big a deal, but papercuts do count.


- My major project in D is a game engine. I want D to not only be 
used for the engine, but also to serve as the scripting 
language as well. Thus, I want D knowledge prerequisite to be as 
low as possible. I still think that 0 should be used as default 
in D, however, I'd be more than happy to have a compiler flag for 
this as well.




I've been wanting to try and contribute to D for awhile now. I've 
looked through the source; haven't coded in C++ in a long time, 
and never professionally, but I should be able to tackle 
something like adding a compiler flag to default FP variables to 
zero. If I write the code, would anyone object to having a flag 
for this?


Re: floats default to NaN... why?

2012-04-15 Thread bearophile

F i L:

I should be able to tackle something like adding a compiler 
flag to default FP variables to zero. If I write the code, 
would anyone object to having a flag for this?


I strongly doubt Walter & Andrei will accept this in the main DMD 
trunk.


Bye,
bearophile


Re: floats default to NaN... why?

2012-04-15 Thread F i L

On Monday, 16 April 2012 at 03:25:15 UTC, bearophile wrote:

F i L:

I should be able to tackle something like adding a compiler 
flag to default FP variables to zero. If I write the code, 
would anyone object to having a flag for this?


I strongly doubt Walter & Andrei will accept this in the main 
DMD trunk.


Do you have an idea as to the reason? Too specific/insignificant an 
issue to justify a compiler flag? They don't like new 
contributors?


I'll wait for a definite yes or no from one of them before I 
approach this.


Re: floats default to NaN... why?

2012-04-15 Thread Ary Manzana

On 4/16/12 12:00 PM, F i L wrote:

On Monday, 16 April 2012 at 03:25:15 UTC, bearophile wrote:

F i L:


I should be able to tackle something like adding a compiler flag to
default FP variables to zero. If I write the code, would anyone
object to having a flag for this?


I strongly doubt Walter & Andrei will accept this in the main DMD trunk.


Do you have an idea as to the reason? Too specific/insignificant an issue to
justify a compiler flag? They don't like new contributors?

I'll wait for a definite yes or no from one of them before I approach this.


It's a flag that changes the behavior of the generated output. That's a 
no no.


Re: floats default to NaN... why?

2012-04-15 Thread F i L

On Monday, 16 April 2012 at 04:05:35 UTC, Ary Manzana wrote:

On 4/16/12 12:00 PM, F i L wrote:

On Monday, 16 April 2012 at 03:25:15 UTC, bearophile wrote:

F i L:

I should be able to tackle something like adding a compiler flag to
default FP variables to zero. If I write the code, would anyone
object to having a flag for this?


I strongly doubt Walter & Andrei will accept this in the main 
DMD trunk.


Do you have an idea as to the reason? Too specific/insignificant 
an issue to

justify a compiler flag? They don't like new contributors?

I'll wait for a definite yes or no from one of them before I 
approach this.


It's a flag that changes the behavior of the generated output. 
That's a no no.


Don't *all* flags technically change behavior? -m64 for instance. 
How is this any different?


Besides, NaN as default is a debugging feature. It's not the same 
thing as -debug/-release, but I think it makes sense to be able 
to disable it.


Re: floats default to NaN... why?

2012-04-14 Thread Ali Çehreli

On 04/13/2012 09:00 PM, F i L wrote:

 default is NaN

Just to complete the picture, character types have invalid initial 
values as well: 0xFF, 0xFFFF, and 0x0000FFFF for char, wchar, and dchar, 
respectively.


Ali



Re: floats default to NaN... why?

2012-04-14 Thread Jonathan M Davis
On Friday, April 13, 2012 23:29:40 Ali Çehreli wrote:
 On 04/13/2012 09:00 PM, F i L wrote:
   default is NaN
 
 Just to complete the picture, character types have invalid initial
 values as well: 0xFF, 0xFFFF, and 0x0000FFFF for char, wchar, and dchar,
 respectively.

Yeah. Thanks for mentioning those. I keep forgetting to list them whenever 
default values get discussed...

- Jonathan M Davis


Re: floats default to NaN... why?

2012-04-14 Thread Manfred Nowak
F i L wrote:

 It sounds like circular reasoning.

Several considerations pressed the design into the current form:
1) always changing output on unchanged input is hard to debug
2) GC needs to be saved from garbage, that looks like pointers
3) missed explicit initializations should not create syntax errors
4) use hardware signalling to overcome some of the limitations 
impressed by 3).
5) more???

For me the only questionable point is numer three.

-manfred


Re: floats default to NaN... why?

2012-04-14 Thread F i L

Jonathan M Davis wrote:
No. You always have a bug if you don't initialize a variable to 
the value that
it's supposed to be. It doesn't matter whether it's 0, NaN, 
527.1209823, or
whatever. All having a default value that you're more likely to 
use means is
that you're less likely to have to explicitly initialize the 
variable. It has

to be initialized to the correct value regardless.


Yes, I'm in favor of default values. That's not my argument here. 
I'm saying it makes more sense to have the default values be 
_usable_ (for convenience) rather than designed to catch 
(**cause**) bugs.




And if you're in the habit
of always initializing variables and never relying on the 
defaults if you can

help it,


That seems like a very annoying requirement for a language 
with a focus on modern convenience. What's more bug-prone, I 
think, is forcing developers to remember that unset floats 
(specifically) will cause bugs while its neighboring int works 
perfectly without an explicit value.



then the cases where variables weren't initialized to what they 
were

supposed to be stand out more.


Usable defaults don't need to stand out because they're usable 
and deterministic. If you want to make sure a constructor/method 
produces expected results, unittest it.. it makes more sense to 
catch errors at compile time anyways.






D's approach is to say that
it's _still_ a bug to not initialize a variable, since you 
almost always need

to initialize a variable to something _other_ than a default.


Not always, but that's beside the point. The point is that in 
the places where you do want zero values (Vector/Matrix/etc., 
Polygon structs, counters, etc.) it's better to have consistent, 
expected behavior from the default value, not some value that 
causes runtime bugs.




I don't see how it's an
issue, since you almost always need to initialize variables to 
something other
than the default, and so leaving them as the default is almost 
always a bug.


To me, it's not a huge issue, only an annoyance. However I 
wouldn't underestimate the impact of bad marketing. When people 
are trying out the language and read that variables are 
defaulted, not garbage, do you think they're going to expect ints 
and floats to work in different ways?


And it doesn't cause bugs to default value types to zero. I have 
enough experience with C# to know that it doesn't. All it does is 
make the language more consistent.



The only point of dispute that I see in general is whether it's 
better to rely
on the default or to still explicitly initialize it when you 
actually want the

default.


This sounds like an argument for C++. Explicit declaration isn't 
a guard against bugs; you'll still be hunting through code if 
something is incorrectly set.


The fact of the matter is that default initialization _does_ happen 
in code, no matter how philosophically correct always explicitly 
defining values might be. Unless that's enforced, it can and should 
be expected to happen. That given, it's more convenient to have 
consistent value type behavior. Float is a value type and 
shouldn't be subject to the security concerns of reference 
types.



Regardless, the _entire_ reason for default-initialization in D 
revolves
around making buggy initializations deterministic and more 
detectable.


The same way string and int are defaulted to usable values, 
float should be as well. Catching code that works with null 
pointers is one thing; working with a value type without 
requiring the programmer to have deeper knowledge of D's 
asymmetrical features is another.


If D doesn't accommodate incoming laymen, how does it expect to 
gain popularity in any major way? Efficiency puts D on the map; 
convenience is what brings the tourists.


I'm not trying to convince you that the sky is falling because I 
disagree with D's direction with floats.. just that if the goal 
is to increase D's popularity, little things may turn heads away 
quicker than you think. My original post was inspired by me 
showing my D code to another C# guy earlier, and coming up with 
poor explanations as to why floats were required to be defaulted 
in my math lib. His reaction was along the lines of my first 
post.


Re: floats default to NaN... why?

2012-04-14 Thread F i L

On Saturday, 14 April 2012 at 06:43:11 UTC, Manfred Nowak wrote:

F i L wrote:


It sounds like circular reasoning.


Several considerations pressed the design into the current form:
1) always changing output on unchanged input is hard to debug
2) GC needs to be saved from garbage that looks like pointers
3) missed explicit initializations should not create syntax 
errors

4) use hardware signalling to overcome some of the limitations
impressed by 3).
5) more???

For me the only questionable point is number three.

-manfred


1) Good argument.
2) Don't know enough about the GC. Can't comment.
3) I agree. I just disagree with how they are expected to be used 
without explicit definition.

4) I have no idea what you just said... :)
5) profit??


Re: floats default to NaN... why?

2012-04-14 Thread F i L

On Saturday, 14 April 2012 at 06:29:40 UTC, Ali Çehreli wrote:

On 04/13/2012 09:00 PM, F i L wrote:

 default is NaN

Just to complete the picture, character types have invalid 
initial values as well: 0xFF, 0xFFFF, and 0x0000FFFF for char, 
wchar, and dchar, respectively.


Ali


That's interesting, but what effect does appending an invalid 
char to a valid one have? Does the resulting string end up being 
NaS (Not a String)? Cause if not, I'm not sure that's a fair 
comparison.


Re: floats default to NaN... why?

2012-04-14 Thread Jonathan M Davis
On Saturday, April 14, 2012 09:58:42 F i L wrote:
 On Saturday, 14 April 2012 at 06:29:40 UTC, Ali Çehreli wrote:
  On 04/13/2012 09:00 PM, F i L wrote:
   default is NaN
  
  Just to complete the picture, character types have invalid
  initial values as well: 0xFF, 0xFFFF, and 0x0000FFFF for char,
  wchar, and dchar, respectively.
  
  Ali
 
 That's interesting, but what effect does appending an invalid
 char to a valid one have? Does the resulting string end up being
 NaS (Not a String)? Cause if not, I'm not sure that's a fair
 comparison.

You can't append a char to a char. You can append them to strings, but not 
each other.

Appending an invalid char results in a string with an invalid char. It won't 
blow up on its own. But any function which attempts to decode the string 
(which includes the vast majority of functions which would be called on a 
string) would result in a UTFException. So, you'll generally know that you 
have a bad string very quickly.

- Jonathan M Davis
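The detection described above is straightforward to check in D: `std.utf.validate` throws a `UTFException` at the first invalid sequence it finds. A minimal sketch:

```d
import std.utf : validate, UTFException;

void main()
{
    char c;              // default-initialized to 0xFF, never valid in UTF-8
    string s = "abc" ~ c;

    bool caught;
    try
    {
        validate(s);     // decoding hits the invalid byte
    }
    catch (UTFException e)
    {
        caught = true;
    }
    assert(caught);      // the bad string is detected as soon as it is decoded
}
```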


Re: floats default to NaN... why?

2012-04-14 Thread F i L
On Saturday, 14 April 2012 at 07:59:25 UTC, Jonathan M Davis 
wrote:

On Saturday, April 14, 2012 09:45:57 F i L wrote:

 If D doesn't accommodate incoming laymen, how does it expect to
 gain popularity in any major way? Efficiency puts D on the map;
 convenience is what brings the tourists.


I believe that you are the first person that I have ever heard 
complain that D
tries to default to error values. I think that some have asked 
why the types
default to what they default to, but it's not something that 
people normally
complain about. I've explained the reasoning behind it. You can 
like it or

not, but as far as I can tell, this is a complete non-issue.


Well my testimonial evidence against yours, eh? :) I'm not trying 
to get under anyone's skin here. You've explained the logic 
behind the decision and I think I've made some valid points 
against that logic. You obviously don't feel like arguing the 
point further, so I'll just leave it at that.





Re: floats default to NaN... why?

2012-04-14 Thread Silveri

On Saturday, 14 April 2012 at 07:52:51 UTC, F i L wrote:

On Saturday, 14 April 2012 at 06:43:11 UTC, Manfred Nowak wrote:

F i L wrote:

4) use hardware signalling to overcome some of the limitations
impressed by 3).


4) I have no idea what you just said... :)


On Saturday, 14 April 2012 at 07:58:44 UTC, F i L wrote:
That's interesting, but what effect does appending an invalid 
char to a valid one have? Does the resulting string end up 
being NaS (Not a String)? Cause if not, I'm not sure that's a 
fair comparison.


The initialization values chosen are also determined by the 
underlying hardware implementation of the type. Signalling NANs 
(http://en.wikipedia.org/wiki/NaN#Signaling_NaN) can be used with 
floats because they are implemented by the CPU, but in the case 
of integers or strings there aren't really equivalent values.


On Saturday, 14 April 2012 at 07:45:58 UTC, F i L wrote:
My original post was inspired by me showing my D code to 
another C# guy earlier, and coming up with poor explanations as 
to why floats were required to be defaulted in my math lib. 
His reaction was along the lines of my first post.


I think the correct mindset when working in D is to think that 
all variables should be initialized, and if you get incorrect 
calculations with zero values, division-by-zero errors, or NaN 
errors, the likely mistake is that this guideline was not 
followed.




Re: floats default to NaN... why?

2012-04-14 Thread dennis luehring

Am 14.04.2012 07:48, schrieb F i L:

On Saturday, 14 April 2012 at 05:19:38 UTC, dennis luehring wrote:

 Am 14.04.2012 06:00, schrieb F i L:

  struct Foo {
int x, y;// ready for use.
float z, w;  // messes things up.
float r = 0; // almost always...
  }


 how often in your code is 0 or 0.0 the real starting point?
 i can't think of any situation except counters or something
 where 0 is a proper start - and float 0.0 is in very very few
 cases a normal start - so whats your point?


Every place that a structure property is designed to be mutated
externally. Almost all Math structures, for instance.


if a float or double is initialized with NaN - all operations on them 
will result in NaN - that is a very good sign of missing proper 
initialisation
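In D this is directly observable; a minimal sketch (variable names are just for illustration):

```d
import std.math : isNaN;

void main()
{
    float z;                    // default-initialized to float.nan
    float r = z * 2.0f + 1.0f;  // any arithmetic involving NaN yields NaN
    assert(z.isNaN);
    assert(r.isNaN);            // the missing initialization propagates
    assert(r != r);             // NaN is the only value unequal to itself
}
```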


what does make float default to 0.0 better - does it just feel better?


Re: floats default to NaN... why?

2012-04-14 Thread Jerome BENOIT



On 14/04/12 09:45, F i L wrote:

Jonathan M Davis wrote:

No. You always have a bug if you don't initialize a variable to the value that
it's supposed to be. It doesn't matter whether it's 0, NaN, 527.1209823, or
whatever. All having a default value that you're more likely to use means is
that you're less likely to have to explicitly initialize the variable. It has
to be initialized to the correct value regardless.


Yes, I'm in favor of default values. That's not my argument here. I'm saying it 
makes more sense to have the default values be _usable_ (for convenience) 
rather than designed to catch (**cause**) bugs.



Why would a compiler set `real' to 0.0 rather than 1.0, Pi, ...?
The more convenient default set certainly depends on the underlying mathematics,
and a compiler  cannot (yet) understand the encoded mathematics.
NaN is certainly the right choice: whatever the involved mathematics,
they will blow up sooner or later. And, from a practical point of view, blowing
up is easy to trace.





Re: floats default to NaN... why?

2012-04-14 Thread bearophile

F i L:


So basically, it's for debugging?


To avoid bugs it's useful for all variables to be initialized 
before use (maybe with an explicit annotation for the uncommon 
cases where you want to use uninitialized memory, like: 
http://research.swtch.com/sparse ). Having a variable not 
initialized is a common source of bugs.


C# solves this by requiring explicit initialization of all 
variables before use. Walter doesn't believe that flow analysis is flexible 
enough in a system language like D (or he doesn't want to 
implement it) so he has not used this idea in D.


So D uses a different, worse strategy: it initializes the 
variables to something unusable, so if you try to use them, your 
program fails clearly.
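A minimal sketch of that "fails clearly" behaviour: with NaN defaults the forgotten initialization is unmissable in the result, whereas a 0.0 default would have produced a plausible-looking number (the `average` helper here is hypothetical):

```d
import std.math : isNaN;

double average(double[] xs)
{
    double sum;            // oops: forgot "= 0.0" -- starts as NaN
    foreach (x; xs)
        sum += x;          // NaN + x stays NaN
    return sum / xs.length;
}

void main()
{
    assert(average([1.0, 2.0, 3.0]).isNaN); // the bug is obvious
    // with a 0.0 default the result would have been a believable 2.0
}
```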


The idea of setting the variables to something useful, like 
setting initialized chars to 'a' and floating point values to 0.0 
looks handy, but on the long run it's not an improvement. You 
sometimes don't initialize a char variable because you are fine 
for it to be 'a', and some other times you don't set it just 
because you forget to initialize it. The compiler can't tell 
apart the two cases, and this may be a bug in your code.


In practice I think having FP variables initialized to NaN has 
not avoided me significant bugs in the last years. On the other 
hand I have ported some C code that uses global floating point 
arrays/variables to D, and I have had to remove some bugs caused 
by the assumption in the C code that those global FP variables 
are initialized to zero, that's false in D.


Another source of troubles is large fixed-sized FP global arrays, 
that inflate the binary a *lot*. You have to remember to 
initialize them to zero explicitly:


double[100_000] foo; // bad, inflates binary.
double[100_000] bar = 0.0; // good.
void main() {}

In D I'd like the solution used by C# + an annotation to not 
initialize a variable or array.
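For the second half of that wish, D already has an opt-out annotation: a `= void` initializer suppresses default initialization of locals (a sketch; reading such a variable before assigning it is undefined):

```d
void main()
{
    double x = void;  // explicitly uninitialized: contents are garbage
    x = 3.14;         // must be assigned before any meaningful use

    double y;         // ordinary declaration: default-initialized to NaN
    assert(y != y);   // NaN compares unequal to itself
}
```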


Bye,
bearophile


Re: floats default to NaN... why?

2012-04-14 Thread Andrej Mitrovic
On 4/14/12, bearophile bearophileh...@lycos.com wrote:
 Having a variable not initialized is a common source of bugs.

I'm going to argue that this was true for C/C++ but is much less true
for D. One benefit of having integrals initialized to 0 is that you
now have a defined default that you can rely on (just as you can rely
on pointers being null by default). Personally I find this to be a big
win in productivity. If the language/runtime does something defined
for me then I can focus on more important things.

But NaN might be a good thing for all I know.. I rarely use
floating-point so I'll stay out of that, but I'd like to have this
fixed: http://d.puremagic.com/issues/show_bug.cgi?id=6303


Re: floats default to NaN... why?

2012-04-14 Thread F i L

Jerome BENOIT wrote:
Why would a compiler set `real' to 0.0 rather than 1.0, Pi, ...?


Because 0.0 is the lowest (smallest, starting point, etc..) 
numerical value. Pi is the corner case and obviously has to be 
explicitly set.


If you want to take this further, chars could even be initialized 
to spaces or newlines or something similar. Pointers/references 
need to be defaulted to null because they absolutely must equal 
an explicit value before use. Value types don't share this 
limitation.


The more convenient default set certainly depends on the 
underlying mathematics,

and a compiler  cannot (yet) understand the encoded mathematics.
NaN is certainly the right choice: whatever the 
involved mathematics,
they will blow up sooner or later. And, from a practical point 
of view, blowing up is easy to trace.


Zero is just as easy for the runtime/compiler to default to; and 
bugs can be introduce anywhere in the code, not just definition. 
We have good ways of catching these bugs in D with unittests 
already.


Re: floats default to NaN... why?

2012-04-14 Thread F i L

On Saturday, 14 April 2012 at 10:38:45 UTC, Silveri wrote:

On Saturday, 14 April 2012 at 07:52:51 UTC, F i L wrote:
On Saturday, 14 April 2012 at 06:43:11 UTC, Manfred Nowak 
wrote:

F i L wrote:

4) use hardware signalling to overcome some of the limitations
imposed by 3).


4) I have no idea what you just said... :)


On Saturday, 14 April 2012 at 07:58:44 UTC, F i L wrote:
That's interesting, but what effect does appending an invalid 
char to a valid one have? Does the resulting string end up 
being NaS (Not a String)? Cause if not, I'm not sure that's 
a fair comparison.


The initialization values chosen are also determined by the 
underlying hardware implementation of the type. Signalling NANs 
(http://en.wikipedia.org/wiki/NaN#Signaling_NaN) can be used 
with floats because they are implemented by the CPU, but in the 
case of integers or strings there aren't really equivalent 
values.


I'm sure the hardware can just as easily signal zeros.



On Saturday, 14 April 2012 at 07:45:58 UTC, F i L wrote:
My original post was inspired by me showing my D code to 
another C# guy earlier, and coming up with poor explanations 
as to why floats were required to be defaulted in my math 
lib. His reaction was along the lines of my first post.


I think the correct mindset when working in D is to think that 
all variables should be initialized, and if you get incorrect 
calculations with zero values, division-by-zero errors, or NaN 
errors, the likely mistake is that this guideline was not 
followed.


Like I said before, this is backwards thinking. At the end of the 
day, you _can_ use default values in D. Given that ints are 
defaulted to usable values, FP Values should be as well for the 
sake of consistency and convenience.


You can't force new D programmers to follow a 'guideline' no 
matter how loudly the documentation shouts it (which it barely 
does at this point), so said guideline is not a dependable 
practice all D programmers will follow (unless it's statically 
enforced)... nor _should_ the learning curve be steepened by 
enforcing awareness of this idiosyncrasy.


The correct mindset from the compiler's perspective should be: 
people create variables to use them. What do they want if they 
didn't specify a value?


therefore our mindset can be: I defined a variable to use. It 
should be zero so I don't need to set it.


Re: floats default to NaN... why?

2012-04-14 Thread F i L

dennis luehring wrote:
what does make float default to 0.0 better - does it just feel 
better?


Not just. It's consistent with int types, therefore easier for 
newbs to pick up since all numeric value types behave the same.


I even think char should default to a usable value as well. Most 
likely a space character. But that's another point.




Re: floats default to NaN... why?

2012-04-14 Thread F i L

On Saturday, 14 April 2012 at 12:48:01 UTC, Andrej Mitrovic wrote:

On 4/14/12, bearophile bearophileh...@lycos.com wrote:

Having a variable not initialized is a common source of bugs.


I'm going to argue that this was true for C/C++ but is much 
less true
for D. One benefit of having integrals initialized to 0 is that 
you
now have a defined default that you can rely on (just as you 
can rely
on pointers being null by default). Personally I find this to 
be a big
win in productivity. If the language/runtime does something 
defined

for me then I can focus on more important things.


Amen! This is exactly what I'm trying to get at. The compiler 
provides defaults as a convenience feature (mostly) so that there's 
no garbage and so values are reliable. It's incredibly 
inconvenient at that point to have to remember to always 
explicitly init one specific type..


This feels very natural in C#.


Re: floats default to NaN... why?

2012-04-14 Thread Andrej Mitrovic
On 4/14/12, F i L witte2...@gmail.com wrote:
 This is exactly what I'm trying to get at.

Anyway it's not all bad news since we can use a workaround:

struct Float {
float payload = 0.0;
alias payload this;
}

void main() {
Float x;  // acts as a float, is initialized to 0.0
}

Not pretty, but it comes in handy.
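Since `alias payload this` forwards both reads and writes, the wrapper can stand in wherever a plain float is expected (the `twice` function below is just for illustration):

```d
struct Float
{
    float payload = 0.0;
    alias payload this;
}

float twice(float v) { return v * 2.0f; } // a plain-float API

void main()
{
    Float x;                  // starts at 0.0, unlike a raw float
    assert(x == 0.0f);
    assert(twice(x) == 0.0f); // implicit conversion via alias this
    x = 1.5f;                 // assignment goes through alias this too
    assert(twice(x) == 3.0f);
}
```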


Re: floats default to NaN... why?

2012-04-14 Thread Jerome BENOIT



On 14/04/12 16:47, F i L wrote:

Jerome BENOIT wrote:

Why would a compiler set `real' to 0.0 rather than 1.0, Pi, ...?


Because 0.0 is the lowest (smallest, starting point, etc..)


what about -infinity?

numerical value. Pi is the corner case and obviously has to be explicitly set.


If you want to take this further, chars could even be initialized to spaces or 
newlines or something similar. Pointers/references need to be defaulted to null 
because they absolutely must equal an explicit value before use. Value types 
don't share this limitation.



The CHAR set is bounded; `real' is not.



The more convenient default set certainly depends on the underlying mathematics,
and a compiler cannot (yet) understand the encoded mathematics.
NaN is certainly the right choice: whatever the involved mathematics,
they will blow up sooner or later. And, from a practical point of view, blowing 
up is easy to trace.


Zero is just as easy for the runtime/compiler to default to;


The Fortran age is over.
The D compiler contains a lot of features that are not easy to set up BUT 
are meant to ease coding.


 and bugs can be introduce anywhere in the code, not just definition.

so the NaN approach discards one source of error.

 We have good ways of catching these bugs in D with unittests already.

Zero may give a value that sounds reasonable; NaN will give a NaN value ... 
which is not reasonable: this is the NaN blow-up, and it can be seen right away.

Note that the NaN approach is used in some numerical libraries, such as CLN.



Re: floats default to NaN... why?

2012-04-14 Thread F i L

On Saturday, 14 April 2012 at 15:44:46 UTC, Jerome BENOIT wrote:



On 14/04/12 16:47, F i L wrote:

Jerome BENOIT wrote:
Why would a compiler set `real' to 0.0 rather then 1.0, Pi, 
 ?


Because 0.0 is the lowest (smallest, starting point, etc..)


what about -infinity?


The concept of -infinity is less meaningful than zero. Zero is 
the logical starting place because zero represents nothing 
(mathematically), which is in line with how pointers behave (only 
applicable to memory, not scale).



numerical value. Pi is the corner case and obviously has to be 
explicitly set.


If you want to take this further, chars could even be 
initialized to spaces or newlines or something similar. 
Pointers/references need to be defaulted to null because they 
absolutely must equal an explicit value before use. Value 
types don't share this limitation.




The CHAR set is bounded; `real' is not.


Good point, I'm not so convinced char should default to ' '. I 
think there are arguments either way, I haven't given it much 
thought.



The more convenient default set certainly depends on the 
underlying mathematics,
and a compiler cannot (yet) understand the encoded 
mathematics.
NaN is certainly the certainly the very choice as whatever 
the involved mathematics,
they will blow up sooner or later. And, from a practical 
point of view, blowing up is easy to trace.


Zero is just as easy for the runtime/compiler to default to;


The Fortran age is over.
The D compiler contains a lot of features that are not easy to 
set up BUT are meant to ease coding.



 and bugs can be introduce anywhere in the code, not just 
definition.


so the NaN approach discards one source of error.


Sure, the question then becomes: does catching bugs introduced by 
inaccurately defining a variable outweigh the price of 
inconsistency and learning curve? My opinion is no; expected 
behavior is more important. Especially when I'm not sure I've 
ever heard of someone in C# having bugs that would have been 
helped by defaulting to NaN. I mean really, how does:


float x; // NaN
...
x = incorrectValue;
...
foo(x); // first time x is used

differ from:

float x = incorrectValue;
...
foo(x);

in any meaning full way? Except that in this one case:

float x; // NaN
...
foo(x); // uses x, resulting in NaNs
...
x = foo(x); // sets after first time x is used

you'll get a more meaningful error message, which, assuming you 
didn't just write a ton of FP code, you'd be able to trace to 
its source faster.


It just isn't enough to justify defaulting to NaN, IMO. I even 
think the process of hunting down bugs is more straightforward 
when defaulting to zero, because every numerical bug is pursued 
the same way, regardless of type. You don't have to remember that 
FP specifically causes these issues in only some cases.


Re: floats default to NaN... why?

2012-04-14 Thread F i L

On Saturday, 14 April 2012 at 15:35:13 UTC, Andrej Mitrovic wrote:

On 4/14/12, F i L witte2...@gmail.com wrote:

This is exactly what I'm trying to get at.


Anyway it's not all bad news since we can use a workaround:

struct Float {
float payload = 0.0;
alias payload this;
}

void main() {
Float x;  // acts as a float, is initialized to 0.0
}

Not pretty, but it comes in handy.


Lol, that's kinda an interesting idea:

struct var(T, T def) {
T payload = def;
alias payload this;
}

alias var!(float,  0.0f) Float;
alias var!(double, 0.0)  Double;
alias var!(real,   0.0)  Real;
alias var!(char,   ' ')  Char;

void main() {
Float f;
assert(f == 0.0f);
}

A hack though, since it doesn't work with 'auto'. I still think 
it should be the other way around, and this should be used to 
default to NaN.


Re: floats default to NaN... why?

2012-04-14 Thread Andrej Mitrovic
On 4/14/12, F i L witte2...@gmail.com wrote:
 a Hack though, since it doesn't work with 'auto'.

What do you mean?


Re: floats default to NaN... why?

2012-04-14 Thread F i L

On Saturday, 14 April 2012 at 17:30:19 UTC, Andrej Mitrovic wrote:

On 4/14/12, F i L witte2...@gmail.com wrote:

a Hack though, since it doesn't work with 'auto'.


What do you mean?


Only that:

auto f = 1.0f; // is float not Float


Re: floats default to NaN... why?

2012-04-14 Thread Andrej Mitrovic
On 4/14/12, F i L witte2...@gmail.com wrote:
  auto f = 1.0f; // is float not Float

UFCS in 2.059 to the rescue:

struct Float
{
float payload = 0.0;
alias payload this;
}

@property Float f(float val) { return Float(val); }

void main()
{
auto f = 1.0.f;
}


Re: floats default to NaN... why?

2012-04-14 Thread Jerome BENOIT



On 14/04/12 18:38, F i L wrote:

On Saturday, 14 April 2012 at 15:44:46 UTC, Jerome BENOIT wrote:



On 14/04/12 16:47, F i L wrote:

Jerome BENOIT wrote:

Why would a compiler set `real' to 0.0 rather than 1.0, Pi, ...?


Because 0.0 is the lowest (smallest, starting point, etc..)


what about -infinity?


The concept of -infinity is less meaningful than zero. Zero is the logical 
starting place because zero represents nothing (mathematically)


zero is not nothing in mathematics - on the contrary!

0 + x = x // neutral for addition
0 * x = 0 // absorbing for multiplication
0 / x = 0 if (x != 0) // idem
| x / 0 | = infinity if (x != 0)
0 / 0 = NaN // undefined
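These identities (and the IEEE 754 special cases around zero) can be checked directly in D:

```d
void main()
{
    double x = 5.0;
    assert(0.0 + x == x);               // zero is neutral for addition
    assert(0.0 * x == 0.0);             // absorbing for multiplication (finite x)
    assert(0.0 / x == 0.0);
    assert(x / 0.0 == double.infinity); // |x / 0| = infinity for x != 0
    double q = 0.0 / 0.0;               // 0 / 0 is undefined
    assert(q != q);                     // i.e. NaN, which is unequal to itself
}
```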


, which is in line with how pointers behave (only applicable to memory, not 
scale).

pointer values are also bounded.





numerical value. Pi is the corner case and obviously has to be explicitly set.


If you want to take this further, chars could even be initialized to spaces or 
newlines or something similar. Pointers/references need to be defaulted to null 
because they absolutely must equal an explicit value before use. Value types 
don't share this limitation.



The CHAR set is bounded; `real' is not.


Good point, I'm not so convinced char should default to ' '. I think there are 
arguments either way, I haven't given it much thought.



The more convenient default set certainly depends on the underlying mathematics,
and a compiler cannot (yet) understand the encoded mathematics.
NaN is certainly the right choice: whatever the involved mathematics,
they will blow up sooner or later. And, from a practical point of view, blowing 
up is easy to trace.


Zero is just as easy for the runtime/compiler to default to;


The Fortran age is over.
The D compiler contains a lot of features that are not easy to set up BUT 
are meant to ease coding.


and bugs can be introduce anywhere in the code, not just definition.

so the NaN approach discards one source of error.


Sure, the question then becomes: does catching bugs introduced by inaccurately 
defining a variable outweigh the price of inconsistency and learning curve? My 
opinion is no; expected behavior is more important.


From a numerical point of view, zero is not a good default (see above), and as 
such setting 0.0 as default for real is not an expected behaviour.
Considering the NaN blow up behaviour, for a numerical folk the expected 
behaviour is certainly setting NaN as default for real.
Real numbers are not meant here for coders, but for numerical folks: D applies 
here a rule gained from the experience of numerical people.
So your opinion is good, but you misplaced the inconsistency: 0 is inaccurate 
here and NaN is accurate, not the contrary.

 Especially when I'm not sure I've ever heard of someone in C# having bugs that 
would have been helped by defaulting to NaN. I mean really, how does:


float x; // NaN
...
x = incorrectValue;
...
foo(x); // first time x is used

differ from:

float x = incorrectValue;
...
foo(x);

in any meaning full way? Except that in this one case:

float x; // NaN
...
foo(x); // uses x, resulting in NaNs
...
x = foo(x); // sets after first time x is used

you'll get a more meaningful error message, which, assuming you didn't just 
write a ton of FP code, you'd be able to trace to its source faster.

It just isn't enough to justify defaulting to NaN, IMO. I even think the 
process of hunting down bugs is more straightforward when defaulting to zero, 
because every numerical bug is pursued the same way, regardless of type. You 
don't have to remember that FP specifically causes these issues in only some 
cases.


For numerical work, because 0 behaves nicely most of the time, improperly 
initialized variables may not be detected because the output data can sound 
reasonable;
on the other hand, because NaN blows up, such detection is straightforward: 
the output will be NaN, which will jump to your face very quickly.
This is a numerical issue, not a coding-language issue. Personally, in my C 
code I have taken the habit of initialising real numbers (doubles) with NaN:
in the GSL library there is a ready-to-use macro, GSL_NAN. (Concerning 
integers, I use extreme values such as INT_MIN, INT_MAX, SIZE_MAX, ...).
I would even say that D may go further by setting a kind of NaN for integers 
(and for chars).


Re: floats default to NaN... why?

2012-04-14 Thread F i L

On Saturday, 14 April 2012 at 18:02:57 UTC, Andrej Mitrovic wrote:

On 4/14/12, F i L witte2...@gmail.com wrote:

 auto f = 1.0f; // is float not Float


UFCS in 2.059 to the rescue:

struct Float
{
float payload = 0.0;
alias payload this;
}

@property Float f(float val) { return Float(val); }

void main()
{
auto f = 1.0.f;
}


You're a scholar and a gentleman!


Re: floats default to NaN... why?

2012-04-14 Thread F i L

On Saturday, 14 April 2012 at 18:07:41 UTC, Jerome BENOIT wrote:

On 14/04/12 18:38, F i L wrote:
On Saturday, 14 April 2012 at 15:44:46 UTC, Jerome BENOIT 
wrote:

On 14/04/12 16:47, F i L wrote:

Jerome BENOIT wrote:
Why would a compiler set `real' to 0.0 rather than 1.0, Pi, ...?


Because 0.0 is the lowest (smallest, starting point, etc..)


what about -infinity?


The concept of -infinity is less meaningful than zero. Zero is 
the logical starting place because zero represents nothing 
(mathematically)


zero is not nothing in mathematics - on the contrary!

0 + x = x // neutral for addition
0 * x = 0 // absorbing for multiplication
0 / x = 0 if (x != 0) // idem
| x / 0 | = infinity if (x != 0)


Just because mathematical equations behave differently with zero 
doesn't change the fact that zero _conceptually_ represents 
nothing


It's the default for practical reasons. Not for mathematics' sake, 
but for the sake of convenience. We don't all study higher 
mathematics, but we're all taught to count since we were 
toddlers. Zero makes sense as the default, and this is compounded 
by the fact that int *must* be zero.



0 / 0 = NaN // undefined


Great! Yet another reason to default to zero. That way, 0 / 0 
bugs have a very distinct fingerprint.



, which is inline with how pointers behave (only applicable to 
memory, not scale).


pointer values are also bounded.


I don't see how that's relevant.


Considering the NaN blow up behaviour, for a numerical folk the 
expected behaviour is certainly setting NaN as default for real.
Real numbers are not meant here for coders, but for numerical 
folks:


Of course FP numbers are meant for coders... they're in a 
programming language. They are used by coders, and not every 
coder that uses FP math *has* to be well trained in the finer 
points of mathematics simply to use a number that can represent 
fractions in a conceptually practical way.



D applies here a rule gained from the experience of numerical 
people.


I'm sorry I can't hear you over the sound of how popular Java and 
C# are. Convenience is about productivity, and that's largely 
influenced by how much prior knowledge someone needs before being 
able to understand a feature's behavior.


(ps. if you're going to use Argumentum ad Verecundiam, I get to 
use Argumentum ad Populum).



For numerical work, because 0 behaves nicely most of the time, 
improperly initialized variables may not be detected because the 
output data can sound reasonable;
on the other hand, because NaN blows up, such detection is 
straightforward: the output will be NaN, which will jump to your 
face very quickly.


I gave examples which address this. This behavior is only 
[debatably] beneficial in corner cases on FP numbers 
specifically. I don't think that's sufficient justification in 
light of reasons I give above.




This is a numerical issue, not a coding language issue.


No, it's both. We're not theoretical physicists; we're software 
engineers writing a very broad scope of different programs.



Personally, in my C code I have taken the habit of initialising 
real numbers (doubles) with NaN:
in the GSL library there is a ready-to-use macro, GSL_NAN. 
(Concerning integers, I use extreme values such as INT_MIN, 
INT_MAX, SIZE_MAX, ...).


Only useful because C defaults to garbage.


I would even say that D may go further by setting a kind of NaN 
for integers (and for chars).


You may get your wish if ARM64 takes over.


Re: floats default to NaN... why?

2012-04-14 Thread Andrej Mitrovic
On 4/14/12, Jerome BENOIT g62993...@rezozer.net wrote:
 I would even say that D may go further by setting a kind of NaN for integers.

That's never going to happen.


Re: floats default to NaN... why?

2012-04-14 Thread Manfred Nowak
F i L wrote:

 You can't force new D programmers to follow a 'guideline'

By exposing a syntax error for every missed explicit initialization the 
current guideline would be changed into an insurmountable barrier, 
forcing every new D programmer to follow the 'guideline'.

-manfred


Re: floats default to NaN... why?

2012-04-14 Thread Joseph Rushton Wakeling

On 14/04/12 16:52, F i L wrote:

The initialization values chosen are also determined by the underlying
hardware implementation of the type. Signalling NANs
(http://en.wikipedia.org/wiki/NaN#Signaling_NaN) can be used with floats
because they are implemented by the CPU, but in the case of integers or
strings there aren't really equivalent values.


I'm sure the hardware can just as easily signal zeros.


The point is not that the hardware can't deal with floats initialized to zero. 
The point is that the hardware CAN'T support an integer equivalent of NaN.  If 
it did, D would surely use it.



Like I said before, this is backwards thinking. At the end of the day, you
_can_ use default values in D. Given that ints are defaulted to usable values,
FP Values should be as well for the sake of consistency and convenience.


Speaking as a new user (well, -ish), my understanding of D is that its design 
philosophy is that _the easy thing to do should be the safe thing to do_, and 
this concept is pervasive throughout the design of the whole language.


So, ideally (as bearophile says) you'd compel the programmer to explicitly 
initialize variables before using them, or explicitly specify that they are not 
being initialized deliberately.  Enforcing that may be tricky (most likely not 
impossible, but tricky, and there are bigger problems to solve for now), so the 
next best thing is to default-initialize variables to something that will scream 
at you THIS IS WRONG!! when the program runs, and so force you to correct the 
error.


For floats, that means NaN.  For ints, the best thing you can do is zero.  It's 
a consistent decision -- not consistent as you frame it, but consistent with the 
language design philosophy.



You can't force new D programmers to follow a 'guideline' no matter how loudly 
the documentation shouts it


No, but you can drop very strong hints as to good practice.  Relying on default 
values for variables is bad programming.  The fact that it is possible with 
integers is a design fault forced on the language by hardware constraints.  As a 
language designer, do you compound the fault by making floats also init to 0 or 
do you enforce good practice in a way which will probably make the user 
reconsider any assumptions they may have made for ints?


Novice programmers need support, but support should not extend to pandering to 
bad habits which they would be better off unlearning (or never learning in the 
first place).


Re: floats default to NaN... why?

2012-04-14 Thread Jerome BENOIT



On 14/04/12 20:51, F i L wrote:

On Saturday, 14 April 2012 at 18:07:41 UTC, Jerome BENOIT wrote:

On 14/04/12 18:38, F i L wrote:

On Saturday, 14 April 2012 at 15:44:46 UTC, Jerome BENOIT wrote:

On 14/04/12 16:47, F i L wrote:

Jerome BENOIT wrote:

Why would a compiler set `real' to 0.0 rather than 1.0, Pi, ...?


Because 0.0 is the lowest (smallest, starting point, etc..)


what about -infinity?


The concept of -infinity is less meaningful than zero. Zero is the logical 
starting place because zero represents nothing (mathematically)




zero is not nothing in mathematics - on the contrary!

0 + x = x // neutral for addition
0 * x = 0 // absorbing for multiplication
0 / x = 0 if (x != 0) // idem
| x / 0 | = infinity if (x != 0)


Just because mathematical equations behave differently with zero doesn't change the fact 
that zero _conceptually_ represents nothing


You are totally wrong: here we are dealing with a key concept of group theory.



It's the default for practical reasons. Not for mathematics' sake, but for the 
sake of convenience. We don't all study higher mathematics, but we're all 
taught to count since we were toddlers. Zero makes sense as the default, and 
this is compounded by the fact that int *must* be zero.



The convenience here is numerical practice, not coding practice: that is the 
point:
for numerical folks, zero is a very bad choice; NaN is a very good one.



0 / 0 = NaN // undefined


Great! Yet another reason to default to zero. That way, 0 / 0 bugs have a 
very distinct fingerprint.


While the others (which are by far more likely) are bypassed: here you are 
making a point against yourself:

NaN + x = NaN
NaN * x = NaN
x / NaN = NaN
NaN / x = NaN





, which is inline with how pointers behave (only applicable to memory, not 
scale).

pointer values are also bounded.


I don't see how that's relevant.


Because then zero is a meaningful default for pointers.





Considering the NaN blow up behaviour, for a numerical folk the expected 
behaviour is certainly setting NaN as default for real.
Real numbers are not meant here for coders, but for numerical folks:


Of course FP numbers are meant for coders... they're in a programming language. 
They are used by coders, and not every coder that uses FP math *has* to be well 
trained in the finer points of mathematics simply to use a number that can 
represent fractions in a conceptually practical way.


The above are not finer points, but basic ones.
Otherwise, float and double would rather be integers than fractions.




D applies here a rule gained from the experience of numerical people.


I'm sorry I can't hear you over the sound of how popular Java and C# are.


Sorry, I can't hear you over the sound of mathematics.

 Convenience is about productivity, and that's largely influenced by how much 
prior knowledge someone needs before being able to understand a feature's 
behavior.

Floating point calculus basics are easy to understand.



(ps. if you're going to use Argumentum ad Verecundiam, I get to use Argumentum 
ad Populum).


So forget coding!





For numerical work, because 0 behaves nicely most of the time, improperly 
initialized variables may not be detected because the output data can sound 
reasonable;
on the other hand, because NaN blows up, such detection is straightforward: 
the output will be NaN, which will jump to your face very quickly.


I gave examples which address this. This behavior is only [debatably] 
beneficial in corner cases on FP numbers specifically. I don't think that's 
sufficient justification in light of reasons I give above.


This is more than sufficient because the authority for floating point (aka 
numerical) stuff is held by numerical folks.





This is a numerical issue, not a coding language issue.


No, it's both.


So a choice has to be made: the mature choice is the NaN approach.


 We're not Theoretical physicists

I am

 we're Software Engineers writing a very broad scope of different programs.

Does floating point calculation belong to the broad scope?
Do engineers rely on the skills of numerical mathematicians when they code 
numerical stuff, or on pre-calculus books for grocers?





Personally, in my C code I have made a habit of initialising real numbers 
(doubles) with NaN:
in the GSL library there is a ready-to-use macro, GSL_NAN. (Concerning 
integers, I use extreme values such as INT_MIN, INT_MAX, SIZE_MAX, ...).
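A rough sketch of that habit (Python for illustration; `UNSET_REAL`, `UNSET_INT`, and the helper functions are hypothetical names playing the roles of GSL_NAN and INT_MIN):

```python
import math
import sys

UNSET_REAL = float("nan")      # plays the role of GSL_NAN
UNSET_INT = -sys.maxsize - 1   # plays the role of INT_MIN

def is_unset_real(x):
    # NaN is the only value not equal to itself, so math.isnan
    # spots the sentinel without risk of colliding with real data.
    return math.isnan(x)

def is_unset_int(n):
    # For integers the sentinel is an ordinary value, so it can
    # collide with legitimate data -- the weakness discussed above.
    return n == UNSET_INT

assert is_unset_real(UNSET_REAL) and not is_unset_real(3.14)
assert is_unset_int(UNSET_INT) and not is_unset_int(0)
```

The asymmetry is the point: the float sentinel is unmistakable, while the integer sentinel is merely improbable.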


Only useful because C defaults to garbage.


It can be initialized by 0.0 as well.




I would even say that D may go further by setting a kind of NaN for integers 
(and for chars).


You may get your wish if Arm64 takes over.


floats default to NaN... why?

2012-04-13 Thread F i L

From the FaQ:

NaNs have the interesting property in that whenever a NaN is 
used as an operand in a computation, the result is a NaN. 
Therefore, NaNs will propagate and appear in the output 
whenever a computation made use of one. This implies that a NaN 
appearing in the output is an unambiguous indication of the use 
of an uninitialized variable.


If 0.0 was used as the default initializer for floating point 
values, its effect could easily be unnoticed in the output, and 
so if the default initializer was unintended, the bug may go 
unrecognized.



So basically, it's for debugging? Is that its only reason? If so 
I'm at a loss as to why the default is NaN. The priority should 
always be ease-of-use, IMO. Especially when it breaks a standard:


struct Foo {
  int x, y;// ready for use.
  float z, w;  // messes things up.
  float r = 0; // almost always...
}

I'm putting this in .Learn because I'm not really suggesting a 
change as much as trying to learn the reasoning behind it. The 
break in consistency isn't outweighed by any debugging 
benefit I can see. I'm not convinced there is any. Having core 
numerical types always and unanimously default to zero is 
understandable and consistent (and what I'm used to with C#). The 
above could be written as:


struct Foo {
  float z = float.nan, ...
}

if you wanted to guarantee the values are set uniquely at 
construction. Which seems like a job better suited for unittests 
to me anyways.


musing...


Re: floats default to NaN... why?

2012-04-13 Thread Jonathan M Davis
On Saturday, April 14, 2012 06:00:35 F i L wrote:
  From the FaQ:
  NaNs have the interesting property in that whenever a NaN is
  used as an operand in a computation, the result is a NaN.
  Therefore, NaNs will propagate and appear in the output
  whenever a computation made use of one. This implies that a NaN
  appearing in the output is an unambiguous indication of the use
  of an uninitialized variable.
  
  If 0.0 was used as the default initializer for floating point
  values, its effect could easily be unnoticed in the output, and
  so if the default initializer was unintended, the bug may go
  unrecognized.
 
 So basically, it's for debugging? Is that its only reason? If so
 I'm at a loss as to why the default is NaN. The priority should
 always be ease-of-use, IMO. Especially when it breaks a standard:
 
  struct Foo {
int x, y;// ready for use.
float z, w;  // messes things up.
float r = 0; // almost always...
  }
 
 I'm putting this in .Learn because I'm not really suggesting a
 change as much as trying to learn the reasoning behind it. The
 break in consistency isn't outweighed by any debugging benefit I
 can see. I'm not convinced there is any. Having core
 numerical types always and unanimously default to zero is
 understandable and consistent (and what I'm used to with C#). The
 above could be written as:
 
  struct Foo {
float z = float.nan, ...
  }
 
 if you wanted to guarantee the values are set uniquely at
 construction. Which seems like a job better suited for unittests
 to me anyways.
 
 musing...

Types default to the closest thing that they have to an invalid value so that 
code blows up as soon as possible if you fail to initialize a variable to a 
proper value and so that it fails deterministically (unlike when variables 
aren't initialized and therefore have garbage values).

NaN is the invalid value for floating point types and works fantastically at 
indicating that you screwed up and failed to initialize or assign your 
variable a proper value. null for pointers and references works similarly 
well.

If anything, the integral types and bool fail, because they don't _have_ 
invalid values. The closest that they have is 0 and false respectively, so 
that's what they get. It's the integral types that are inconsistent, not the 
floating point types.

It was never really intended that variables would be default initialized with 
values that you would use. You're supposed to initialize them or assign them 
to appropriate values before using them. Now, since the default values are 
well-known and well-defined, you can rely on them if you actually _want_ those 
values, but the whole purpose of default initialization is to make code fail 
deterministically when variables aren't properly initialized - and to fail as 
quickly as possible.

- Jonathan M Davis


Re: floats default to NaN... why?

2012-04-13 Thread dennis luehring

Am 14.04.2012 06:00, schrieb F i L:

  struct Foo {
int x, y;// ready for use.
float z, w;  // messes things up.
float r = 0; // almost always...
  }


how often in your code is 0 or 0.0 the real starting point?
i can't think of any situation except counters or something
where 0 is a proper start - and float 0.0 is in very very few cases a 
normal start - so what's your point?


Re: floats default to NaN... why?

2012-04-13 Thread F i L
So it's what I thought, the only reason is based on a faulty 
premise, IMO.


Jonathan M Davis wrote:
Types default to the closest thing that they have to an invalid 
value so that
code blows up as soon as possible if you fail to initialize a 
variable to a

proper value and so that it fails deterministically


This seems like exactly opposite behavior I'd expect from the 
compiler. Modern convenience means more bang per character, and 
initializing values to garbage is the corner case, not the usual 
case.



(unlike when variables
aren't initialized and therefore have garbage values).


This is the faulty premise I see. Garbage values are a C/C++ 
thing. They must be forced in D, e.g. float x = void.


I would argue that because values *should* have implicit, 
non-garbage, default values that those default values should be 
the most commonly used/expected lowest value. Especially since 
ints _must_ be 0 (though I hear this is changing in Arm64).



NaN is the invalid value for floating point types and works 
fantastically at
indicating that you screwed up and failed to initialize or 
assign your

variable a proper value.



null for pointers and references works similarily
well.


Not exactly. NaNs don't cause Segfaults or Undefined behavior, 
they just make the math go haywire. It's like it was designed to 
be inconvenient. The argument looks like this to me:


We default values so there's no garbage-value bugs.. but the 
default is something that will cause a bug.. because values 
should be explicitly defaulted so they're not unexpected values 
(garbage).. even though we could default them to an expected 
value since we're doing it to begin with


It sounds like circular reasoning.
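The "goes haywire" behaviour mentioned above is easy to demonstrate with a short sketch (Python for illustration; the IEEE-754 semantics are the same as D's):

```python
import math

nan = float("nan")

# NaN neither raises nor segfaults; arithmetic just keeps producing NaN.
assert math.isnan(nan * 2.0 + 1.0)

# Every ordered comparison involving NaN is false -- even with itself --
# which is how NaN-tainted logic silently takes the wrong branch
# instead of crashing the way a null dereference would.
assert not (nan < 1.0)
assert not (nan > 1.0)
assert not (nan == nan)
```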


It was never really intended that variables would be default 
initialized with

values that you would use.


why exactly? again, this is a faulty premise IMO.



You're supposed to initialize them or assign them
to appropriate values before using them.


sure, but if they always default to _usable_ constants no 
expectations are lost and no bugs are created.




Now, since the default values are
well-known and well-defined, you can rely on them if you 
actually _want_ those

values,


yes, and how often do you _want_ a NaN in the mix? You can rely 
on usable values just as much. Even more so since Ints and Floats 
would be consistent.



but the whole purpose of default initialization is to make code 
fail
deterministically when variables aren't properly initialized - 
and to fail as

quickly as possible.


that only makes sense in C/C++ where values are implicitly garbage 
and mess things up.



Again, this is only my perspective. I would love to hear 
convincing arguments to how great D currently defaulting to NaN 
is, and how much headache (I never knew I had) it will save me... 
but I just don't see it. In fact I'm now more convinced of the 
opposite. Never in C# have I run into issues with unexpected 
values from default initializers. Most important values are set 
at runtime through object constructors; not at declaration.


Re: floats default to NaN... why?

2012-04-13 Thread F i L

On Saturday, 14 April 2012 at 05:19:38 UTC, dennis luehring wrote:

Am 14.04.2012 06:00, schrieb F i L:

 struct Foo {
   int x, y;// ready for use.
   float z, w;  // messes things up.
   float r = 0; // almost always...
 }


how often in your code is 0 or 0.0 the real starting point?
i can't think of any situation except counters or something
where 0 is a proper start - and float 0.0 is in very very few 
cases a normal start - so what's your point?


Every place that a structure property is designed to be mutated 
externally. Almost all Math structures, for instance.


Defaults are to combat garbage values, but debugging cases where 
values were accidentally unset (most likely missed during 
construction) seems like a better job for a unittest.




Re: floats default to NaN... why?

2012-04-13 Thread Jonathan M Davis
On Saturday, April 14, 2012 07:41:33 F i L wrote:
  You're supposed to initialize them or assign them
  to appropriate values before using them.
 
 sure, but if they always default to _usable_ constants no
 expectations are lost and no bugs are created.

No. You always have a bug if you don't initialize a variable to the value that 
it's supposed to be. It doesn't matter whether it's 0, NaN, 527.1209823, or 
whatever. All having a default value that you're more likely to use means is 
that you're less likely to have to explicitly initialize the variable. It has 
to be initialized to the correct value regardless. And if you're in the habit 
of always initializing variables and never relying on the defaults if you can 
help it, then the cases where variables weren't initialized to what they were 
supposed to be stand out more.

  but the whole purpose of default initialization is to make code
  fail
  deterministically when variables aren't properly initialized -
  and to fail as
  quickly as possible.
 
 that only makes sense in C/C++ where values are implicitly garbage
 and mess things up.

??? D was designed with an eye to improve on C/C++. In C/C++, variables aren't 
guaranteed to be initialized, so if you forget to initialize them, you get 
garbage, which is not only buggy, it results in non-deterministic behavior. 
It's always a bug to not initialize a variable. D's approach is to say that 
it's _still_ a bug to not initialize a variable, since you almost always need 
to initialize a variable to something _other_ than a default. But rather than 
leaving them as garbage, D makes it so that variables are default-initialized, 
making the buggy behavior deterministic. And since not initializing a variable 
is almost always a bug, default values which were the closest to error values 
for each type were chosen.

You can disagree with the logic, but there it is. I don't see how it's an 
issue, since you almost always need to initialize variables to something other 
than the default, and so leaving them as the default is almost always a bug.

The only point of dispute that I see in general is whether it's better to rely 
on the default or to still explicitly initialize it when you actually want the 
default. Relying on the default works, but by always explicitly initializing 
variables, those which are supposed to be initialized to something other than 
the defaults but aren't are then much more obvious.

Regardless, the _entire_ reason for default-initialization in D revolves 
around making buggy initializations deterministic and more detectable.

- Jonathan M Davis