Re: Github names & avatars

2016-05-13 Thread Ali Çehreli via Digitalmars-d

On 05/13/2016 03:18 PM, Walter Bright wrote:


Ironically, hiding contributions under a pseudonym may make one a less
desirable candidate because nobody will know that you're any good.


This.


Being a well-known contributor to a prestigious
project is a shortcut to better things.


Yep.

Ali



Re: Always false float comparisons

2016-05-13 Thread jmh530 via Digitalmars-d

On Saturday, 14 May 2016 at 01:26:18 UTC, Walter Bright wrote:


An anecdote: a colleague of mine was once doing a chained 
calculation. At every step, he rounded to 2 digits of precision 
after the decimal point, because 2 digits of precision was 
enough for anybody. I carried out the same calculation to the 
max precision of the calculator (10 digits). He simply could 
not understand why his result was off by a factor of 2, which 
was a couple hundred times his individual roundoff error.





I'm sympathetic to this. Some of my work deals with statistics 
and you see people try to use formulas that are faster but less 
accurate, and it can really get you into trouble. Var(X) = E(X^2) 
- E(X)^2 is only true for real numbers, not floating point 
arithmetic. It can also lead to weird results when dealing with 
matrix inverses.
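
To make the cancellation concrete, here is a small D sketch (the 
function names and data are invented for illustration):

import std.algorithm : map, sum;

// Population variance via the algebraic identity -- numerically fragile,
// because it subtracts two huge, nearly equal numbers.
double naiveVar(double[] xs)
{
    immutable n = cast(double) xs.length;
    immutable meanOfSquares = xs.map!(x => x * x).sum / n;
    immutable mean = xs.sum / n;
    return meanOfSquares - mean * mean;
}

// Two-pass formula: subtract the mean first, then square the deviations.
double twoPassVar(double[] xs)
{
    immutable n = cast(double) xs.length;
    immutable mean = xs.sum / n;
    return xs.map!(x => (x - mean) * (x - mean)).sum / n;
}

void main()
{
    // Large mean, tiny spread: exactly the case that kills the identity.
    auto xs = [1e8 + 1.0, 1e8 + 2.0, 1e8 + 3.0];
    assert(twoPassVar(xs) > 0); // correct answer is 2/3
    // naiveVar(xs) loses all its significant digits in double precision
    // and can come out as 0 or even negative.
}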


I like the idea of a float type that is effectively the largest 
precision on your machine (the D real type). However, I could be 
convinced by the argument that you should have to opt in to this 
and that internal calculations should not implicitly use it, 
mainly because I'm sympathetic to the people who would prefer 
speed to precision. Not everybody needs all the precision all the 
time.


Re: Always false float comparisons

2016-05-13 Thread Ola Fosheim Grøstad via Digitalmars-d

On Saturday, 14 May 2016 at 01:26:18 UTC, Walter Bright wrote:
BTW, I once asked Prof Kahan about this. He flat out told me 
that the only reason to downgrade precision was if storage was 
tight or you needed it to run faster. I am not making this up.


He should have been aware of reproducibility, since people use 
fixed point to achieve it; if he wasn't, then shame on him.


In Java, all compile-time constants are evaluated with strict 
settings, and the language provides the «strictfp» keyword to get 
strict behaviour for a particular class or function.


In C++, template parameters cannot be floating point; you use 
std::ratio to get exact rational numbers instead. This is to 
avoid inaccuracy problems in the type system.


In interval arithmetic you need to round the bound computations 
up and down correctly to get correct results. (It is OK for the 
interval to be larger than the real result, but the opposite is a 
disaster.)
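
As a minimal D sketch (illustrative only), the bound computations 
can be done with std.math.FloatingPointControl; a real interval 
library would also have to keep the compiler from constant-folding 
or reordering the additions:

import std.math : FloatingPointControl;

struct Interval { double lo, hi; }

// Add two intervals, rounding each bound outward so that the true
// result is always contained in the returned interval.
Interval add(Interval a, Interval b)
{
    FloatingPointControl fpctrl;  // saves FPU state, restores it on scope exit
    fpctrl.rounding = FloatingPointControl.roundDown;
    immutable lo = a.lo + b.lo;   // lower bound rounded toward -infinity
    fpctrl.rounding = FloatingPointControl.roundUp;
    immutable hi = a.hi + b.hi;   // upper bound rounded toward +infinity
    return Interval(lo, hi);
}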


With reproducible arithmetic you can do advanced, accurate static 
analysis of programs using floating point code.


With reproducible arithmetic you can sync nodes in a cluster 
based on "time" alone, saving exchanges of data in simulations.


There are lots of reasons to default to well-defined floating 
point arithmetic.




Re: Github names & avatars

2016-05-13 Thread H. S. Teoh via Digitalmars-d
On Sat, May 14, 2016 at 08:09:51AM +0300, Andrei Alexandrescu via Digitalmars-d 
wrote:
> On 5/14/16 12:01 AM, Meta wrote:
> >So many careers have been lost over some flippant tweet or Github
> >comment that complete anonymity is the only sane option, whenever
> >possible.
> 
> Could you bring some evidence or list a few anecdotes over the careers
> lost over a tweet or github comment? Thx! -- Andrei

Not sure how reliable this is, but a realtor friend of mine had a
colleague who got fired from the realtor company because of a remark
made IIRC on Facebook (or one of those social media things) about his
personal values that somebody in power in the company didn't agree with.

Not every employer cuts you slack the way we net-savvy people expect
reasonable people would. Personally, I think this kind of occurrence is
relatively rare, but still, it's very real.


T

-- 
Тише едешь, дальше будешь. ("The slower you go, the farther you'll get.")


Re: Github names & avatars

2016-05-13 Thread Andrei Alexandrescu via Digitalmars-d

On 5/14/16 12:01 AM, Meta wrote:

So many careers have been lost over some flippant tweet or Github
comment that complete anonymity is the only sane option, whenever possible.


Could you bring some evidence or list a few anecdotes over the careers 
lost over a tweet or github comment? Thx! -- Andrei


[OT] Re: Github names & avatars

2016-05-13 Thread Andrei Alexandrescu via Digitalmars-d

On 5/13/16 11:54 PM, Xinok wrote:

On Friday, 13 May 2016 at 18:56:15 UTC, Walter Bright wrote:

If some company won't hire you because you contributed code to D, I'd
say you dodged a bullet working for such!


I've known a couple people who had to apply for over 200-300 positions
before they finally got a job in their field. Life isn't so convenient
that we can pick and choose which job we want. Sometimes, you've gotta
take what you can get. But suppose one of these people was a member of
the D community and they get turned down for every job they apply for
because the employer discovered something dumb they posted in this thread:

http://forum.dlang.org/thread/gpcyapiqlkpfahrzf...@forum.dlang.org

The internet never forgets so a little anonymity is a good thing.


I honestly think this concern is overrated, sometimes to the extent it 
becomes a fallacy. The converse benefits of anonymity are also 
exaggerated in my opinion. My own experience is evidence. A simple 
pattern I followed throughout is:


1. Do good work
2. Put your name next to it
3. Goto 1

I've written a large number of things under my name that I shouldn't 
have, the most epic being probably 
http://lists.boost.org/Archives/boost/2002/01/23189.php. But if the 
prevalent pattern is good work under your name, then you stand to gain a 
_lot_. People understand the occasional fluke - and this community is a 
prime example.


Your name is your brand. (In the US quite literally anybody can do 
business using their name as the company name with no extra paperwork.) 
You have the option to build your brand and walk into a room and just 
say it to earn instantly everyone's respect and attention. Or you can 
introduce yourself and then awkwardly list the various handles under 
which you might also be known. I was repeatedly surprised (this week most 
recently) at the brand power my name has in the most unexpected 
circumstances.



Andrei



Re: Command line parsing

2016-05-13 Thread Andrei Alexandrescu via Digitalmars-d

On 5/13/16 2:27 PM, Russel Winder via Digitalmars-d wrote:

On Thu, 2016-05-12 at 18:25 +, Jesse Phillips via Digitalmars-d
wrote:
[…]

unknown flags harder and displaying help challenging. So I'd like
to see getopt merge with another getopt


getopt is a 1970s C solution to the problem of command line parsing.
Most programming languages have moved on from getopt and created
language-idiomatic solutions to the problem. Indeed there are other,
better solutions in C now as well.


What are those and how are they better? -- Andrei



Re: Researcher question – what's the point of semicolons and curly braces?

2016-05-13 Thread Joe Duarte via Digitalmars-d

On Tuesday, 3 May 2016 at 12:47:42 UTC, qznc wrote:


The parser needs information about "blocks". Here is an example:

if (x)
    foo();
    bar();

Is bar() always executed or only if (x) is true? In other 
words, is bar() part of the block, which is only entered 
conditionally?


There are three methods to communicate blocks to the compiler: 
curly braces, significant whitespace (Python, Haskell), or an 
"end" keyword (Ruby, Pascal). Which one you prefer is 
subjective.


You mention Facebook and face recognition. I have not seen 
anyone try machine learning for parsing. It would probably be a 
fun project, but not a practical one.


You suggest that understanding structured text should be a 
solved problem. It is: you need to use a formal language, which 
programming languages are. English, for example, is much less 
structured; ambiguities arise easily. For example:


  I saw a man on a hill with a telescope.

Who has the telescope? You or the man you saw? Who is on the 
hill?


As a programmer, I do not want to write ambiguous programs. We 
produce more than enough bugs without ambiguity.


Thanks for the example! So you laid out the three options for 
signifying blocks. Then you said which one you prefer is 
subjective, but that you don't want to write ambiguous programs. 
Do you think that the curly braces and semicolons help with that?


So in your example, I figure bar's status is language-defined, 
and programmers will be trained in the language in the same way 
they are now. I've been sketching out a new language, and there 
are a couple of ways I could see implementing this.


First, blocks of code are separated by one or more blank lines. 
No blank lines are allowed in a block. An if block would have to 
terminate in an else statement, so I think this example just 
wouldn't compile. Now if we wanted two things to happen on an if 
hit, we could leave it the way you gave where the two things are 
at the same level of indentation. That's probably what I'd settle 
on, contingent on a lot of research, including my own studies and 
other researchers', though this probably isn't one of the big 
issues. If we wanted to make the second thing conditional on 
success on the first task, then I would require another indent. 
Either way the block wouldn't compile without an else.


I've been going through a lot of Unicode, icon fonts, and the 
Noun Project, looking for clean and concise representations for 
program logic. One of the ideas I've been working with is to 
leverage Unicode arrows. In most cases it's trivial aesthetic 
clean-up, like → instead of ->, and a lot of it could be simple 
autoreplace/autocomplete in tools. For if logic, you can see an 
example of bent arrows, and how I'd express the alternatives for 
your example here: 
http://i1376.photobucket.com/albums/ah13/DuartePhotos/if%20block%20with%20Unicode%20arrows_zpsnuigkkxz.png





Re: Always false float comparisons

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/13/2016 5:49 PM, Timon Gehr wrote:

Nonsense. That might be true for your use cases. Others might actually depend on
IEE 754 semantics in non-trivial ways. Higher precision for temporaries does not
imply higher accuracy for the overall computation.


Of course it implies it.

An anecdote: a colleague of mine was once doing a chained calculation. At every 
step, he rounded to 2 digits of precision after the decimal point, because 2 
digits of precision was enough for anybody. I carried out the same calculation 
to the max precision of the calculator (10 digits). He simply could not 
understand why his result was off by a factor of 2, which was a couple hundred 
times his individual roundoff error.
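
The effect is easy to reproduce. Here is a contrived D sketch (invented 
numbers, not his actual calculation):

import std.math : round;

void main()
{
    // Round to 2 digits after the decimal point, as he did at every step.
    static double roundTo2(double x) { return round(x * 100) / 100; }

    double chained = 0, full = 0;
    foreach (i; 0 .. 1000)
    {
        chained = roundTo2(chained + 0.006); // round after every step
        full += 0.006;                       // carry full precision
    }
    // Each individual roundoff is at most 0.005, yet 'chained' ends up
    // at 10.0 while 'full' is ~6.0: a total error hundreds of times the
    // individual roundoff error.
}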




E.g., correctness of double-double arithmetic is crucially dependent on correct
rounding semantics for double:
https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format#Double-double_arithmetic


Double-double has its own peculiar issues, and is not relevant to this 
discussion.



Also, it seems to me that for e.g.
https://en.wikipedia.org/wiki/Kahan_summation_algorithm,
the result can actually be made less precise by adding casts to higher precision
and truncations back to lower precision at appropriate places in the code.


I don't see any support for your claim there.



And even if higher precision helps, what good is a "precision-boost" that e.g.
disappears on 64-bit builds and then creates inconsistent results?


That's why I was thinking of putting in 128 bit floats for the compiler 
internals.



Sometimes reproducibility/predictability is more important than maybe making
fewer rounding errors sometimes. This includes reproducibility between CTFE and
runtime.


A more accurate answer should never cause your algorithm to fail. It's like 
putting better parts in your car causing the car to fail.




Just actually comply with the IEEE floating point standard when using their
terminology. There are algorithms that are designed for it and that might stop
working if the language does not comply.


Conjecture. I've written FP algorithms (from Cody+Waite, for example), and none 
of them degraded when using more precision.



Consider that the 8087 has been operating at 80-bit precision by default for 30 
years. I've NEVER heard of anyone getting actual bad results from this. They 
have complained that their test suites, which tested for less accurate results, 
broke. They have complained about the speed of x87. And Intel has been trying to 
get rid of the x87 forever. Sometimes I wonder if there's a disinformation 
campaign about more accuracy being bad, because it smacks of nonsense.


BTW, I once asked Prof Kahan about this. He flat out told me that the only 
reason to downgrade precision was if storage was tight or you needed it to run 
faster. I am not making this up.


Re: Always false float comparisons

2016-05-13 Thread Timon Gehr via Digitalmars-d

On 14.05.2016 02:49, Timon Gehr wrote:

result can actually be made less precise


less accurate. I need to go to sleep.


Re: Always false float comparisons

2016-05-13 Thread Timon Gehr via Digitalmars-d

On 14.05.2016 02:49, Timon Gehr wrote:

IEE


IEEE.


Re: Always false float comparisons

2016-05-13 Thread Timon Gehr via Digitalmars-d

On 13.05.2016 23:35, Walter Bright wrote:

On 5/13/2016 12:48 PM, Timon Gehr wrote:

IMO the compiler should never be allowed to use a precision different
from the one specified.


I take it you've never been bitten by accumulated errors :-)
...


If that was the case it would be because I explicitly ask for high 
precision if I need it.


If the compiler using or not using a higher precision magically fixes an 
actual issue with accumulated errors, that means the correctness of the 
code is dependent on something hidden, that you are not aware of, and 
that could break any time, for example at a time when you really don't 
have time to track it down.



Reduced precision is only useful for storage formats and increasing
speed.  If a less accurate result is desired, your algorithm is wrong.


Nonsense. That might be true for your use cases. Others might actually 
depend on IEE 754 semantics in non-trivial ways. Higher precision for 
temporaries does not imply higher accuracy for the overall computation.


E.g., correctness of double-double arithmetic is crucially dependent on 
correct rounding semantics for double:

https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format#Double-double_arithmetic

Also, it seems to me that for e.g. 
https://en.wikipedia.org/wiki/Kahan_summation_algorithm,
the result can actually be made less precise by adding casts to higher 
precision and truncations back to lower precision at appropriate places 
in the code.
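
For reference, here is the algorithm in D (a direct transcription of 
the pseudocode on that page). The compensation (t - sum) - y is exactly 
zero in real-number arithmetic, so evaluating the temporaries at a 
different precision, or "simplifying" algebraically, can silently 
destroy the correction:

// Kahan (compensated) summation: recovers the low-order bits that
// plain summation discards at every step.
double kahanSum(const(double)[] xs)
{
    double sum = 0.0;
    double c = 0.0;                // running compensation for lost bits
    foreach (x; xs)
    {
        immutable y = x - c;       // apply the previous correction
        immutable t = sum + y;     // big + small: low bits of y are lost...
        c = (t - sum) - y;         // ...and recovered here
        sum = t;
    }
    return sum;
}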


And even if higher precision helps, what good is a "precision-boost" 
that e.g. disappears on 64-bit builds and then creates inconsistent results?


Sometimes reproducibility/predictability is more important than maybe 
making fewer rounding errors sometimes. This includes reproducibility 
between CTFE and runtime.


Just actually comply with the IEEE floating point standard when using 
their terminology. There are algorithms that are designed for it and 
that might stop working if the language does not comply.


Then maybe add additional built-in types with a given storage size that 
additionally /guarantee/ a certain amount of additional scratch space 
when used for function-local computations.


Re: Always false float comparisons

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/13/2016 2:42 PM, Ola Fosheim Grøstad wrote:

On Friday, 13 May 2016 at 21:36:52 UTC, Walter Bright wrote:

On 5/13/2016 1:57 PM, Ola Fosheim Grøstad wrote:

It should in C++ with the right strict-settings,


Consider what the C++ Standard says, not what the endless switches to tweak
the compiler do.


The C++ standard cannot even require IEEE754. Nobody relies only on what the C++
standard says in real projects. They rely on what the chosen compiler(s) on
concrete platform(s) do.



Nevertheless, C++ is what the Standard says it is. If Brand X compiler does 
something else, you should call it "Brand X C++".


Re: Potential issue with DMD where the template constraints are not evaluated early enough to prevent type recursion

2016-05-13 Thread Timon Gehr via Digitalmars-d

On 13.05.2016 23:21, Georgi D wrote:

Hi,

I have the following code which should compile in my opinion:

struct Foo {}

import std.range.primitives;
import std.algorithm.iteration : map, joiner;

auto toChars(R)(R r) if (isInputRange!R)
{
return r.map!(toChars).joiner(", ");
}

auto toChars(Foo f)
{
import std.range : chain;
return chain("foo", "bar");
}

void main()
{
import std.range : repeat;
Foo f;
auto r = f.repeat(3);
auto chars = r.toChars();
}

But fails to compile with the following error:

Error: template instance std.algorithm.iteration.MapResult!(toChars,
Take!(Repeat!(Foo))) forward reference of function toChars

The reason it fails to compile, in my opinion, is that the template
constraint fails to remove the generic toChars from the list of possible
matches early enough, so the compiler thinks there is a recursive call
and cannot deduce the return type.


It's tricky. The reason it fails to compile is that the template 
argument you are passing does not actually refer to the overload set.


return r.map!(.toChars).joiner(", "); works.



Consider:

int foo()(){
    pragma(msg, typeof(&foo)); // int function()
    return 2;
}
double foo(){
    return foo!();
}

The reason for this behavior is that the first declaration is syntactic 
sugar for:


template foo(){
    int foo(){
        pragma(msg, typeof(&foo));
        return 2;
    }
}

Since template foo() introduces a scope, the inner 'int foo()' shadows 
the outer 'double foo()'. There are special cases in the compiler that 
reverse eponymous lookup before overload resolution (i.e. go from 
foo!().foo back to foo) in case some identifier appears in the context 
ident() or ident!(), so one does not usually run into this. This is not 
done for alias parameters.


The error message is bad though. Also, I think it is not unreasonable to 
expect the code to work. Maybe reversal of eponymous lookup should be 
done for alias parameters too.


Re: Github names & avatars

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/13/2016 1:54 PM, Xinok wrote:

I've known a couple people who had to apply for over 200-300 positions before
they finally got a job in their field. Life isn't so convenient that we can pick
and choose which job we want. Sometimes, you've gotta take what you can get.


Ironically, hiding contributions under a pseudonym may make one a less desirable 
candidate because nobody will know that you're any good.




But suppose one of these people was a member of the D community and they get 
turned
down for every job they apply for because the employer discovered something dumb
they posted in this thread:

http://forum.dlang.org/thread/gpcyapiqlkpfahrzf...@forum.dlang.org

The internet never forgets so a little anonymity is a good thing.


Note that this is a professional forum, not a chat room. I have suggested many 
times that people maintain a professional decorum here, i.e. don't post things 
that are unacceptable to say at work.


1. Using a pseudonym here is not license to be a jerk

2. It's not that hard to adhere to a professional standard of conduct

3. If you want to vent about politics and religion, reddit is just a click away

4. Consider your name as your professional brand. By posting and githubbing 
under your name, there's a significant opportunity to enhance your brand, which 
translates into being able to get better jobs at higher pay. Anonymity is a fine 
way to have to send out hundreds of resumes to get a job. Being a well-known 
contributor to a prestigious project is a shortcut to better things.




Re: The Case Against Autodecode

2016-05-13 Thread H. S. Teoh via Digitalmars-d
On Fri, May 13, 2016 at 09:26:40PM +0200, Marco Leise via Digitalmars-d wrote:
> Am Fri, 13 May 2016 10:49:24 +
> schrieb Marc Schütz :
> 
> > In fact, even most European languages are affected if NFD 
> > normalization is used, which is the default on MacOS X.
> > 
> > And this is actually the main problem with it: It was introduced 
> > to make unicode string handling correct. Well, it doesn't, 
> > therefore it has no justification.
> 
> +1 for leaning back and contemplate exactly what auto-decode
> was aiming for and how it missed that goal.
> 
> You'll see that an ö may still be cut between the o and the ¨.
> Hangul symbols are composed of pieces that go in different
> corners. Those would also be split up by auto-decode.
> 
> Can we handle real world text AT ALL? Are graphemes good
> enough to find the column in a fixed width display of some
> string (e.g. line+column or an error)? No, there may still be
> full-width characters in there that take 2 columns. :p
[...]

A simple lookup table ought to fix this. Preferably in std.uni so that
it doesn't get reinvented by every other project.
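
Something in the spirit of the classic wcwidth(), e.g. (a skeletal 
sketch; only a few of the wide ranges are listed, and a real 
implementation would generate the table from Unicode's 
EastAsianWidth.txt):

// Columns occupied by a character in a fixed-width display.
int displayWidth(dchar c)
{
    // Abbreviated list of ranges with East Asian Width 'W'/'F'.
    static immutable dchar[2][] wide = [
        [0x1100, 0x115F],   // Hangul Jamo
        [0x2E80, 0x9FFF],   // CJK radicals, ideographs, kana, ...
        [0xAC00, 0xD7A3],   // Hangul syllables
        [0xFF00, 0xFF60],   // fullwidth forms
    ];
    foreach (r; wide)
        if (c >= r[0] && c <= r[1])
            return 2;
    return 1; // (combining marks should really return 0, etc.)
}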


T

-- 
Don't modify spaghetti code unless you can eat the consequences.


Re: Follow-up post explaining research rationale

2016-05-13 Thread QAston via Digitalmars-d

On Monday, 9 May 2016 at 19:09:35 UTC, Joe Duarte wrote:
I'm interested in people's *first encounters* with programming, 
in high school or college, how men and women might 
differentially assess programming as a career option, and why.


My motivation was that computers were awesome and I wanted to 
work with those machines no matter what. I had no internet when I 
was learning programming, which resulted in one of the least 
optimal paths to learning programming: an introductory course on 
x86 assembly from a journal I had. It was anything but intuitive. 
Many people I know started with the manuals for their 8-bit 
computers, which was probably even less intuitive. So, 
anecdotally speaking, motivation is the key.




One D-specific question I do have: Have any women ever posted 
here? I scoured a bunch of threads here recently and couldn't 
find a female poster. By this I mean a poster whose supplied 
name was female, where a proper name was supplied (some people 
just have usernames). Of course we don't really know who is 
posting, and there could be some George Eliot situations, but 
the presence/absence of self-identified women is useful enough. 
Women are underrepresented in programming, but the skew in 
online programming communities is even more extreme – we're 
seeing near-zero percent in lots of boards. This is not a 
D-specific problem. Does anyone know of occasions where women 
posted here? Links?


There are women on these forums; they just apparently don't feel 
the urge to disclose their sex with every post so that curious 
social scientists can count them.


I'm interested in monocultures and diversity issues in a number 
of domains. I've done some recent work on the lack of 
philosophical and political diversity in social science, 
particularly in social psychology, and how this has undermined 
the quality and validity of our research (here's a recent paper 
by me and my colleagues in Behavioral and Brain Sciences: 
http://dx.doi.org/10.1017/S0140525X14000430). My interest in 
the lack of gender diversity in programming is an entirely 
different research area, but there isn't much rigorous social 
science and cognitive psychology research on this topic, which 
surprised me. I think it's an important and interesting issue. 
I also think a lot of the diversity efforts that are salient in 
tech right now are acting far too late in the cycle, sort of 
just waiting for women and minorities to show up. The skew 
starts long before people graduate with a CS degree, and I 
think Google, Microsoft, Apple, Facebook, et al. should think 
deeply about how programming language design might be 
contributing to these effects (especially before they roll out 
any more C-like programming languages).


It seems odd that the abstract of your paper is about ideological 
diversity, but here you focus on gender diversity. Diversity of 
ideas is important, though you don't show the link between gender 
diversity and diversity of ideas.


Informally, I think what's happening in many cases is that when 
smart women are exposed to programming, it looks ridiculous and 
they think something like "Screw this – I'm going to med 
school", or any of a thousand permutations of that sentiment.


Mainstream PL syntax is extremely unintuitive and poorly 
designed by known pedagogical, epistemological, and 
communicative science standards. The vast majority people who 
are introduced to programming do not pursue it (likely true of 
many fields, but programming may see a smaller grab than most – 
this point requires a lot more context). I'm open to the 
possibility that the need to master the bizarre syntax of 
incumbent programming languages might serve as a useful filter 
for qualities valuable in a programmer, but I'm not sure how 
good or precise the filter is.


Your research seems to have a very big omission: textual 
representation is not the only representation of programs; 
therefore, programming doesn't have to have syntax. The first 
programming environment I was introduced to was an executable 
flowchart environment.


I've been teaching Lego robotics programming courses for 
children (7-14 year olds, both boys and girls, though more boys 
attended the courses) using the NXT-G programming environment. 
Lots of fun, mostly teaching by example, with multiple teachers 
so that we could keep up with those kids.


The oldest ones could also attend a course which taught them 
how to do those things in a C-like programming language (NXC). 
Kids had much more difficulty with that, because writing a 
program in text requires making many more decisions and much more 
effort, for no return noticeable to them. And we couldn't go 
through theory (like what expressions are, etc.) because these 
were just children, not used to lectures. :P


1. There's no clear distinction between types and names. It's 
just plain text run-on phrases like "char string". string is an 
unfortunate name here, and reminds us that this would be a type 
in many mod

Re: The Case Against Autodecode

2016-05-13 Thread Jonathan M Davis via Digitalmars-d
On Friday, May 13, 2016 12:52:13 Kagamin via Digitalmars-d wrote:
> On Friday, 13 May 2016 at 10:38:09 UTC, Jonathan M Davis wrote:
> > IIRC, Andrei talked in TDPL about how Java's choice to go with
> > UTF-16 was worse than the choice to go with UTF-8, because it
> > was correct in many more cases
>
> UTF-16 was a migration from UCS-2, and UCS-2 was superior at the
> time.

The history of why UTF-16 was chosen isn't really relevant to my point
(Win32 has the same problem as Java and for similar reasons).

My point was that if you use UTF-8, then it's obvious _really_ fast when you
screwed up Unicode-handling by treating a code unit as a character, because
anything beyond ASCII is going to fall flat on its face. But with UTF-16, a
_lot_ more code points - and even whole graphemes - fit in a single code
unit, so it's far easier to write code that treats a code unit as if it
were a full character without realizing that you're screwing it up.
UTF-8 is fail-fast in this regard, whereas UTF-16 is not.

UTF-32 takes that problem to a new level, because now you'll only notice
problems when you're dealing with a grapheme constructed of multiple code
points. So, odds are that even if you test with Unicode strings, you won't
catch the bugs. It'll work 99% of the time, and you'll get subtle bugs the
rest of the time.

There are reasons to operate at the code point level, but in general, you
either want to be operating at the code unit level or the grapheme level,
not the code point level, and if you don't know what you're doing, then
anything other than the grapheme level is likely going to be wrong if you're
manipulating individual characters. Fortunately, a lot of string processing
doesn't need to operate on individual characters and as long as the standard
library functions get it right, you'll tend to be okay, but still, operating
at the code point level is almost always wrong, and it's even harder to
catch when it's wrong than when treating UTF-16 code units as characters.
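
To put numbers on the three levels (the NFD string is just an 
illustration):

import std.range : walkLength;
import std.uni : byGrapheme;

void main()
{
    string s = "noe\u0308l";              // "noël" with a combining diaeresis
    assert(s.length == 6);                // UTF-8 code units
    assert(s.walkLength == 5);            // code points (auto-decoded)
    assert(s.byGrapheme.walkLength == 4); // graphemes: n o ë l
}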

- Jonathan M Davis



Re: Always false float comparisons

2016-05-13 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 13 May 2016 at 21:36:52 UTC, Walter Bright wrote:

On 5/13/2016 1:57 PM, Ola Fosheim Grøstad wrote:

It should in C++ with the right strict-settings,


Consider what the C++ Standard says, not what the endless 
switches to tweak the compiler do.


The C++ standard cannot even require IEEE754. Nobody relies only 
on what the C++ standard says in real projects. They rely on what 
the chosen compiler(s) on concrete platform(s) do.




Re: Always false float comparisons

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/13/2016 1:57 PM, Ola Fosheim Grøstad wrote:

It should in C++ with the right strict-settings,


Consider what the C++ Standard says, not what the endless switches to tweak the 
compiler do.




Re: Always false float comparisons

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/13/2016 12:48 PM, Timon Gehr wrote:

IMO the compiler should never be allowed to use a precision different from the
one specified.


I take it you've never been bitten by accumulated errors :-)

Reduced precision is only useful for storage formats and increasing speed. If a 
less accurate result is desired, your algorithm is wrong.


Re: The Case Against Autodecode

2016-05-13 Thread Steven Schveighoffer via Digitalmars-d

On 5/13/16 5:25 PM, Alex Parrill wrote:

On Friday, 13 May 2016 at 16:05:21 UTC, Steven Schveighoffer wrote:

On 5/12/16 4:15 PM, Walter Bright wrote:


10. Autodecoded arrays cannot be RandomAccessRanges, losing a key
benefit of being arrays in the first place.


I'll repeat what I said in the other thread.

The problem isn't auto-decoding. The problem is hijacking the char[]
and wchar[] (and variants) array type to mean autodecoding non-arrays.

If you think this code makes sense, then my definition of sane varies
slightly from yours:

static assert(!hasLength!R && is(typeof(R.init.length)));
static assert(!is(ElementType!R == typeof(R.init[0])));
static assert(!isRandomAccessRange!R && is(typeof(R.init[0])) &&
              is(typeof(R.init[0 .. $])));

I think D would be fine if string meant some auto-decoding struct with
an immutable(char)[] array backing. I can accept and work with that. I
can transform that into a char[] that makes sense if I have no use for
auto-decoding. As of today, I have to use byCodePoint, or
.representation, etc. and it's very unwieldy.

If I ran D, that's what I would do.



Well, the "auto" part of autodecoding means "automatically doing it for
plain strings", right? If you explicitly do decoding, I think it would
just be "decoding"; there's no "auto" part.


No, the problem isn't the auto-decoding. The problem is having *arrays* 
do that. Sometimes.


I would be perfectly fine with a custom string type that all string 
literals were typed as, as long as I can get a sanely behaving array out 
of it.



I doubt anyone is going to complain if you add in a struct wrapper
around a string that iterates over code units or graphemes. The issue
most people have, as you say, is the fact that the default for strings
is to decode.


I want to clarify that I don't really care if strings by default 
auto-decode. I think that's fine. What I dislike is that 
immutable(char)[] auto-decodes.


-Steve


Re: documented unit tests as examples

2016-05-13 Thread Steven Schveighoffer via Digitalmars-d

On 5/13/16 5:16 PM, Adam D. Ruppe wrote:

On Friday, 13 May 2016 at 20:39:56 UTC, Steven Schveighoffer wrote:

Thoughts?


Yeah, I'm not a big fan of this either. A lot of in-comment examples
have been moved to unittests lately, and I think it is a net negative:

* Running it gives silent output
* Data representation in source isn't always instructive
* Assert just kinda looks weird

The one pro would be that it is automatically tested... but is it?
Consider the following:

---
import std.stdio;

///
unittest {
writeln("Hello, world!");
}
---

That passes the test, but if the user copy/pasted the example, it
wouldn't actually compile because of the missing import.


This isn't tested, even for DDOC examples. So I'm not super concerned 
about it.


A unit test can use a private module symbol and that won't work in user 
code either.



Certainly, some surrounding boilerplate is expected much of the time,
but the unittest doesn't even prove it actually runs with the same
user-expected surrounding code. It just proves it runs from the
implementation module: it can use private imports, private functions,
and more.

So it is a dubious win for automatic testing too.


The idea is to make sure examples actually compile and run in SOME context.

To get down to the lowest level, you can tell someone to copy and paste 
an example in notepad, and run dmd on it, but if they haven't yet 
downloaded dmd, then it won't work ;) It's impossible to test what the 
user is going to do or know before hand.



D's documented unittests are somewhere in the middle... and I think
fails to capture the advantages of either extreme.


I'm not knocking documented unit tests. Certainly, without documented 
unit tests, the situation before was that examples were riddled with 
bugs. What I'm pointing out here is that DDOC examples have some 
advantages that unittests cannot currently use.


A potential way to fix this may be marking a unit test as being a 
complete example program that assumes the user has installed proper 
access to the library. Then it won't compile unless you add the correct 
imports, and it's treated as if it were in a separate module (no private 
symbol access). This is probably the closest we can get to simulating a 
user copying an example unit test into his own file and trying to run it.


-Steve


Re: The Case Against Autodecode

2016-05-13 Thread Alex Parrill via Digitalmars-d
On Friday, 13 May 2016 at 16:05:21 UTC, Steven Schveighoffer 
wrote:

On 5/12/16 4:15 PM, Walter Bright wrote:

10. Autodecoded arrays cannot be RandomAccessRanges, losing a 
key

benefit of being arrays in the first place.


I'll repeat what I said in the other thread.

The problem isn't auto-decoding. The problem is hijacking the 
char[] and wchar[] (and variants) array type to mean 
autodecoding non-arrays.


If you think this code makes sense, then my definition of sane 
varies slightly from yours:


static assert(!hasLength!R && is(typeof(R.init.length)));
static assert(!is(ElementType!R == typeof(R.init[0])));
static assert(!isRandomAccessRange!R && is(typeof(R.init[0])) 
              && is(typeof(R.init[0 .. $])));


I think D would be fine if string meant some auto-decoding 
struct with an immutable(char)[] array backing. I can accept 
and work with that. I can transform that into a char[] that 
makes sense if I have no use for auto-decoding. As of today, I 
have to use byCodePoint, or .representation, etc. and it's very 
unwieldy.


If I ran D, that's what I would do.

-Steve


Well, the "auto" part of autodecoding means "automatically doing 
it for plain strings", right? If you explicitly do decoding, I 
think it would just be "decoding"; there's no "auto" part.


I doubt anyone is going to complain if you add in a struct 
wrapper around a string that iterates over code units or 
graphemes. The issue most people have, as you say, is the fact 
that the default for strings is to decode.




Potential issue with DMD where the template constraints are not evaluated early enough to prevent type recursion

2016-05-13 Thread Georgi D via Digitalmars-d

Hi,

I have the following code which should compile in my opinion:

struct Foo {}

import std.range.primitives;
import std.algorithm.iteration : map, joiner;

auto toChars(R)(R r) if (isInputRange!R)
{
   return r.map!(toChars).joiner(", ");
}

auto toChars(Foo f)
{
   import std.range : chain;
   return chain("foo", "bar");
}

void main()
{
   import std.range : repeat;
   Foo f;
   auto r = f.repeat(3);
   auto chars = r.toChars();
}

But fails to compile with the following error:

Error: template instance 
std.algorithm.iteration.MapResult!(toChars, Take!(Repeat!(Foo))) 
forward reference of function toChars


The reason it fails to compile, in my opinion, is that the template 
constraint fails to remove the generic toChars from the list of 
possible matches early enough, so the compiler thinks there is a 
recursive call and cannot deduce the return type.


Just changing the name of the generic toChars to anything else 
makes the code compile:


struct Foo {}

import std.range.primitives;
import std.algorithm.iteration : map, joiner;

auto toCharsRange(R)(R r) if (isInputRange!R)
{
   return r.map!(toChars).joiner(", ");
}

auto toChars(Foo f)
{
   import std.range : chain;
   return chain("foo", "bar");
}

void main()
{
   import std.range : repeat;
   Foo f;
   auto r = f.repeat(3);
   auto chars = r.toCharsRange();
}

Compiles just fine.





Re: documented unit tests as examples

2016-05-13 Thread Jonathan M Davis via Digitalmars-d
On Friday, May 13, 2016 16:39:56 Steven Schveighoffer via Digitalmars-d wrote:
> Just looking at this PR: https://github.com/dlang/phobos/pull/4319
>
> Now the example, instead of running and producing output (i.e. visual
> feedback) that the program is doing something, just runs and creates no
> feedback.
>
> I'm wondering if we can have a mechanism for documented unit tests to
> have a slightly different showing inside the docs vs. the actual unit test.
>
> For example, let's say we have a function writelnAssert. Used like this:
>
> writelnAssert(someText, "Text You Expect To Output");
>
> When running this function, it's basically just an assert that someText
> == the expected text. However, when DDOC creates the document for this,
> it says:
>
> writeln(someText); // "Text You Expect To Output"
>
> This way, we are actually testing the output, but at the same time,
> giving someone playing with the example the tools to see some feedback.
>
> Thoughts?

I confess that I really don't have a problem with assertions being used in
examples like they normally are. That's how unit tests work, and anyone who
uses D for any length of time is going to find that out quickly. And it's
not like you can see the result of writeln anyway unless you copy-paste the
code into an editor to run it yourself, and if you do that, you can replace
the assertion with a call to writeln easily enough. And the assertion can
actually show you the output in the documentation, whereas you won't get
that with writeln. So, if anything, it seems to me that having assertions
that show you the values is better than having to copy-paste the code into
an editor to run it and see the output.

What's a bigger problem is when you want an example that does something like
talk to a website (like std.net.curl does), since it's frequently not
reasonable to have that in unit tests. Fortunately, it's still perfectly
possible to have an untested example if you need it, and it's not usually
needed.

- Jonathan M Davis



Re: documented unit tests as examples

2016-05-13 Thread Adam D. Ruppe via Digitalmars-d
On Friday, 13 May 2016 at 20:39:56 UTC, Steven Schveighoffer 
wrote:

Thoughts?


Yeah, I'm not a big fan of this either. A lot of in-comment 
examples have been moved to unittests lately, and I think it is a 
net negative:


* Running it gives silent output
* Data representation in source isn't always instructive
* Assert just kinda looks weird

The one pro would be that it is automatically tested... but is 
it? Consider the following:


---
import std.stdio;

///
unittest {
   writeln("Hello, world!");
}
---

That passes the test, but if the user copy/pasted the example, it 
wouldn't actually compile because of the missing import.


Certainly, some surrounding boilerplate is expected much of the 
time, but the unittest doesn't even prove it actually runs with 
the same user-expected surrounding code. It just proves it runs 
from the implementation module: it can use private imports, 
private functions, and more.


So it is a dubious win for automatic testing too.



I prefer examples that you can copy/paste and get something 
useful out of more than documented unittests. I actually really 
like either one-liners showing the syntax and/or complete 
programs that do something - MSDN is good about having those 
kinds of examples.


D's documented unittests are somewhere in the middle... and I 
think fails to capture the advantages of either extreme.


Re: Github names & avatars

2016-05-13 Thread Jonathan M Davis via Digitalmars-d
On Friday, May 13, 2016 19:53:18 bitwise via Digitalmars-d wrote:
> May be worth mentioning archiving sites like gmane that seem to
> love making your stupid questions/statements your #1 google
> search result. Excellent way to make an impression on future
> employers... ;)

Actually, the fact that I had a fairly high reputation on stackoverflow helped
me get my current job, and I've had other companies look favorably on the
fact that I have activity on github. I'm even amazingly searchable given how
common all of my names are thanks primarily to this newsgroup. You'll
probably find me fairly quickly if you search for

Jonathan M Davis programming

and this in spite of the fact that Jonathan and Davis are both _very_ common
(and my middle name, Michael, is just as bad). I definitely think that
having a visible presence on sites like stackoverflow and github is good,
and if they have your real name (or something close to it) with your real
photo, it's a lot easier to show that it's really you.

Sure, some folks may want to stay more anonymous, and that's their
prerogative, but in my experience, having a visible presence online with
regards to programming is definitely an aid in getting employment. Potential
employers can actually see that you know something and that other
programmers think that you know something, whereas they can't if you do
everything online under a pseudonym and/or never contribute to projects
online or make any of your own code available online.

As to Walter's original point of recognizing folks, it's definitely nicer
when contributors use the same names in the newsgroup and on github (be they
their real names or not). Otherwise, it _can_ be a pain to figure out that
they're the same person. For frequent contributors, you tend to figure it
out and remember it, but even then, it's more work than when the names match
- and if the contributor is not a frequent contributor, then it's unlikely
that any connection is going to be made, and it's going to seem like they
came out of nowhere when they might actually be someone who posts in the
newsgroup semi-frequently.

- Jonathan M Davis



Re: Github names & avatars

2016-05-13 Thread Meta via Digitalmars-d

On Friday, 13 May 2016 at 18:56:15 UTC, Walter Bright wrote:
If some company won't hire you because you contributed code to 
D, I'd say you dodged a bullet working for such!


  When I was young, I worried about what other people thought 
of me.
  When I was middle aged, I stopped caring what other people 
thought of me.

  When I was old, I realized nobody thought about me.


Unfortunately, you can't just say whatever you want nowadays and 
expect people to respect your freedom to do so. So many careers 
have been lost over some flippant tweet or Github comment that 
complete anonymity is the only sane option, whenever possible.


Re: documented unit tests as examples

2016-05-13 Thread Steven Schveighoffer via Digitalmars-d

On 5/13/16 4:55 PM, Meta wrote:


When I was new to D and I first saw the `assert(...)` idiom in an
example in the documentation, it confused me for a minute or two, but if
you know what `assert` does you can quickly wrap your head around the
fact that it's both a test and an example. This would benefit users that
are completely new to programming in general, however.


Given the fact that asserts aren't always run, it's never comforting to 
me to run a program that tests something and have it give NO feedback. 
In fact, I frequently find myself triggering the assert to make sure 
it's actually being run (and I've caught the build not actually running 
it many times).


This has a negative effect on anyone actually looking to see how a D 
function works. I can write a program that does nothing easily enough; 
why such a complicated example?


-Steve


Re: Always false float comparisons

2016-05-13 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 13 May 2016 at 18:16:29 UTC, Walter Bright wrote:
Please have the frontend behave such that it operates on the 
precise datatype expressed by the type... the backend probably 
does this too, and runtime certainly does; they all match.


Except this never happens anyway.


It should in C++ with the right strict settings, which make the 
compiler use reproducible floating point operations. AFAIK it 
should work out even in modern JavaScript.




Re: documented unit tests as examples

2016-05-13 Thread Meta via Digitalmars-d
On Friday, 13 May 2016 at 20:39:56 UTC, Steven Schveighoffer 
wrote:
Just looking at this PR: 
https://github.com/dlang/phobos/pull/4319


Now the example, instead of running and producing output (i.e. 
visual feedback) that the program is doing something, just runs 
and creates no feedback.


I'm wondering if we can have a mechanism for documented unit 
tests to have a slightly different showing inside the docs vs. 
the actual unit test.


For example, let's say we have a function writelnAssert. Used 
like this:


writelnAssert(someText, "Text You Expect To Output");

When running this function, it's basically just an assert that 
someText == the expected text. However, when DDOC creates the 
document for this, it says:


writeln(someText); // "Text You Expect To Output"

This way, we are actually testing the output, but at the same 
time, giving someone playing with the example the tools to see 
some feedback.


Thoughts?

-Steve


When I was new to D and I first saw the `assert(...)` idiom in an 
example in the documentation, it confused me for a minute or two, 
but if you know what `assert` does you can quickly wrap your head 
around the fact that it's both a test and an example. This would 
benefit users that are completely new to programming in general, 
however.


Re: Github names & avatars

2016-05-13 Thread Xinok via Digitalmars-d

On Friday, 13 May 2016 at 18:56:15 UTC, Walter Bright wrote:
If some company won't hire you because you contributed code to 
D, I'd say you dodged a bullet working for such!


I've known a couple people who had to apply for over 200-300 
positions before they finally got a job in their field. Life 
isn't so convenient that we can pick and choose which job we 
want. Sometimes, you've gotta take what you can get. But suppose 
one of these people was a member of the D community and they get 
turned down for every job they apply for because the employer 
discovered something dumb they posted in this thread:


http://forum.dlang.org/thread/gpcyapiqlkpfahrzf...@forum.dlang.org

The internet never forgets so a little anonymity is a good thing.


documented unit tests as examples

2016-05-13 Thread Steven Schveighoffer via Digitalmars-d

Just looking at this PR: https://github.com/dlang/phobos/pull/4319

Now the example, instead of running and producing output (i.e. visual 
feedback) that the program is doing something, just runs and creates no 
feedback.


I'm wondering if we can have a mechanism for documented unit tests to 
have a slightly different showing inside the docs vs. the actual unit test.


For example, let's say we have a function writelnAssert. Used like this:

writelnAssert(someText, "Text You Expect To Output");

When running this function, it's basically just an assert that someText 
== the expected text. However, when DDOC creates the document for this, 
it says:


writeln(someText); // "Text You Expect To Output"

This way, we are actually testing the output, but at the same time, 
giving someone playing with the example the tools to see some feedback.


Thoughts?

-Steve


Re: The Case Against Autodecode

2016-05-13 Thread Jonathan M Davis via Digitalmars-d
On Friday, May 13, 2016 11:00:19 Marc Schütz via Digitalmars-d wrote:
> On Friday, 13 May 2016 at 10:38:09 UTC, Jonathan M Davis wrote:
> > Ideally, algorithms would be Unicode aware as appropriate, but
> > the default would be to operate on code units with wrappers to
> > handle decoding by code point or grapheme. Then it's easy to
> > write fast code while still allowing for full correctness.
> > Granted, it's not necessarily easy to get correct code that
> > way, but anyone who wants fully correctness without caring
> > about efficiency can just use ranges of graphemes. Ranges of
> > code points are rare regardless.
>
> char[], wchar[] etc. can simply be made non-ranges, so that the
> user has to choose between .byCodePoint, .byCodeUnit (or
> .representation as it already exists), .byGrapheme, or even
> higher-level units like .byLine or .byWord. Ranges of char, wchar
> however stay as they are today. That way it's harder to
> accidentally get it wrong.

It also means yet more special cases. You have arrays which aren't treated
as ranges when every other type of array out there is treated as a range.
And even if that's what we want to do, there isn't really a clean
deprecation path.

> There is a simple deprecation path that's already been suggested.
> `isInputRange` and friends can output a helpful deprecation
> warning when they're called with a range that currently triggers
> auto-decoding.

How would you put a deprecation message inside of an eponymous template like
isInputRange?  Deprecation messages are triggered when a symbol is used, not
when it passes or fails a static if inside of a template. And even if we did
something like put a pragma in isInputRange, you'd get a _flood_ of messages
in any program that does much of anything with ranges and strings. It's a
possible path, but it sure isn't a pretty one. Honestly, I'd have to wonder
whether just outright breaking code would be better.
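
For concreteness, the pragma route would look something like this 
hypothetical sketch (not actual Phobos code); the message fires once 
per instantiation in every separately compiled module, which adds up 
fast:

import std.range.primitives : empty, front, popFront;

template isInputRange(R)
{
    // Hypothetical: warn when R is one of the auto-decoded string types.
    static if (is(R : const(char)[]) || is(R : const(wchar)[]))
        pragma(msg, "note: " ~ R.stringof ~ " auto-decodes as a range");
    enum bool isInputRange = is(typeof((R r)
    {
        if (r.empty) {}     // can test for empty
        auto h = r.front;   // can get the front element
        r.popFront();       // can advance
    }));
}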

- Jonathan M Davis




Re: Github names & avatars

2016-05-13 Thread bitwise via Digitalmars-d

On Friday, 13 May 2016 at 17:02:20 UTC, Walter Bright wrote:
I'll ask again that the active Github users use their own name, 
and add to that if you could have a selfie as your github image.


It avoids when people who post as "Fred" on the newsgroup 
submit PRs as "HorseWrangler" and get annoyed when I don't 
realize they are the same person, and then I overlook them at 
the conference because I have no idea what they look like.


In today's surveillance state, the government already knows 
your name and what you look like, so being anonymous on github 
is a bit pointless, as if anyone cares that you are interested 
in D. I can understand if you're a celebrity or want nobody to 
know you're a dog, but that doesn't apply to most of us.


https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_you%27re_a_dog


May be worth mentioning archiving sites like gmane that seem to 
love making your stupid questions/statements your #1 google 
search result. Excellent way to make an impression on future 
employers... ;)


Re: Github names & avatars

2016-05-13 Thread Max Samukha via Digitalmars-d

On Friday, 13 May 2016 at 18:56:15 UTC, Walter Bright wrote:


  When I was old, I realized nobody thought about me.



I did. Then I saw this 
http://joshworth.com/dev/pixelspace/pixelspace_solarsystem.html 
and stopped.



Seriously, a captcha that requires programmers to do basic 
arithmetic is a horrible idea. It's like having a gynecologist 
watch porn. To make it worse, DPaste doesn't compile the code as 
is.


Re: Always false float comparisons

2016-05-13 Thread Timon Gehr via Digitalmars-d

On 13.05.2016 21:25, Walter Bright wrote:

On 5/12/2016 4:06 PM, Marco Leise wrote:

Am Mon, 9 May 2016 04:26:55 -0700
schrieb Walter Bright :


I wonder what's the difference between 1.30f and cast(float)1.30.


There isn't one.


Oh yes, there is! Don't you love floating-point...

cast(float)1.30 rounds twice, first from a base-10
representation to a base-2 double value and then again to a
float. 1.30f directly converts to float.


This is one reason why the compiler carries everything internally to 80
bit precision, even if they are typed as some other precision. It avoids
the double rounding.



IMO the compiler should never be allowed to use a precision different 
from the one specified.


Re: The Case Against Autodecode

2016-05-13 Thread Iakh via Digitalmars-d

On Friday, 13 May 2016 at 01:00:54 UTC, Walter Bright wrote:

On 5/12/2016 5:47 PM, Jack Stouffer wrote:
D is much less popular now than was Python at the time, and 
Python 2 problems
were more straight forward than the auto-decoding problem.  
You'll need a very
clear migration path, years long deprecations, and automatic 
tools in order to
make the transition work, or else D's usage will be 
permanently damaged.


I agree, if it is possible at all.


A plan:
1. Mark as deprecated the places where auto-decoding is used. I 
think that's all the "range" functions for strings (front, 
popFront, back, ...). Force using byChar & co.

2. Introduce a new String type in Phobos.

3. After ages, make immutable(char)[] an ordinary array.

Is it OK? Profit?


Re: Always false float comparisons

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/12/2016 4:06 PM, Marco Leise wrote:

Am Mon, 9 May 2016 04:26:55 -0700
schrieb Walter Bright :


I wonder what's the difference between 1.30f and cast(float)1.30.


There isn't one.


Oh yes, there is! Don't you love floating-point...

cast(float)1.30 rounds twice, first from a base-10
representation to a base-2 double value and then again to a
float. 1.30f directly converts to float.


This is one reason why the compiler carries everything internally to 80 bit 
precision, even if they are typed as some other precision. It avoids the double 
rounding.




Re: The Case Against Autodecode

2016-05-13 Thread Marco Leise via Digitalmars-d
Am Fri, 13 May 2016 10:49:24 +
schrieb Marc Schütz :

> In fact, even most European languages are affected if NFD 
> normalization is used, which is the default on MacOS X.
> 
> And this is actually the main problem with it: It was introduced 
> to make unicode string handling correct. Well, it doesn't, 
> therefore it has no justification.

+1 for leaning back and contemplate exactly what auto-decode
was aiming for and how it missed that goal.

You'll see that an ö may still be cut between the o and the ¨.
Hangul symbols are composed of pieces that go in different
corners. Those would also be split up by auto-decode.

Can we handle real world text AT ALL? Are graphemes good
enough to find the column in a fixed width display of some
string (e.g. line+column or an error)? No, there may still be
full-width characters in there that take 2 columns. :p

-- 
Marco



Re: Github names & avatars

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/13/2016 11:25 AM, jmh530 wrote:

My reason for anonymity is that I ask stupid questions that I don't necessarily
want associated with my real name.


I've said a lot of stupid things on the internet. I just stopped worrying about 
it :-) If someone wants to think poorly of me over such, my attitude is f*** 'em.


I'm not demanding you use your real name, just suggesting.


Re: Github names & avatars

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/13/2016 11:21 AM, Xinok wrote:

There are companies which specialize in doing background
checks on potential hires and they'll dig up every little secret they can find
about you online.  While you may think an association with D isn't a big deal,
some people out there can be extremely bigoted and will judge you based on that
alone.


If some company won't hire you because you contributed code to D, I'd say you 
dodged a bullet working for such!


  When I was young, I worried about what other people thought of me.
  When I was middle aged, I stopped caring what other people thought of me.
  When I was old, I realized nobody thought about me.



It goes both ways as well; if one deplorable individual happens to get a
lot of public attention and he becomes associated with the D community (oh how
the media loves to spin a story), it will reflect poorly on our community as a
whole.


I know this happens in politics (cue Trump's problem with one of his 
superdelegates) but I'd hate to tell someone they have to hide their identity 
because they are unpopular.




So no, let's not start using our real names just because you fail to recognize
some people.


If you feel strongly about it, then don't.


Re: Github names & avatars

2016-05-13 Thread Mike Parker via Digitalmars-d

On Friday, 13 May 2016 at 18:21:59 UTC, Xinok wrote:



So no, let's not start using our real names just because you 
fail to recognize some people.


That's not at all what he said. At any rate, it's a simple 
request and you need not follow it.


For the record, I've used @aldacron everywhere except these 
forums since I've been involved with D (and before). I changed my 
github handle to @mdparker during DConf, as I intend to be more 
active in submitting documentation PRs, and to help alleviate the 
confusing situation where people knew me as one or the other but 
not both. It's quite difficult to find a real-name handle on 
github when you have three of the most common names in the 
English language!


Re: Github names & avatars

2016-05-13 Thread jmh530 via Digitalmars-d

On Friday, 13 May 2016 at 17:52:34 UTC, Walter Bright wrote:


It's a suggestion, not a requirement. I respect that some 
people have good reasons for anonymity.


My reason for anonymity is that I ask stupid questions that I 
don't necessarily want associated with my real name.


Re: The Case Against Autodecode

2016-05-13 Thread H. S. Teoh via Digitalmars-d
On Fri, May 13, 2016 at 12:16:30PM +, Nick Treleaven via Digitalmars-d 
wrote:
> On Friday, 13 May 2016 at 00:47:04 UTC, Jack Stouffer wrote:
> >If you're serious about removing auto-decoding, which I think you and
> >others have shown has merits, you have to have THE SIMPLEST migration
> >path ever, or you will kill D. I'm talking a simple press of a
> >button.
> 
> char[] is always going to be unsafe for UTF-8. I don't think we can
> remove it or auto-decoding, only discourage use of it. We need a
> String struct IMO, without length or indexing. Its front can do
> autodecoding, and it has a ubyte[] raw() property too. (Possibly the
> byte length of front can be cached for use in popFront, assuming it
> was faster). This would be a gradual transition.

alias String = typeof(std.uni.byGrapheme(immutable(char)[].init));

:-)

Well, OK, perhaps you could wrap this in a struct that allows extraction
of .raw, etc.. But basically this isn't hard to implement today. We
already have all of the tools necessary.
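
For instance, a minimal sketch of such a wrapper (the name String8 and the 
.raw member are illustrative, not an existing Phobos API):

import std.uni : byGrapheme;

struct String8
{
    immutable(char)[] raw;          // the underlying UTF-8 code units

    // Grapheme-level view for iteration; length/indexing deliberately absent.
    auto opSlice() { return raw.byGrapheme; }
}

void main()
{
    auto s = String8("Привет");
    foreach (g; s[]) {}             // iterates graphemes, not code units
}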


T

-- 
Dogs have owners ... cats have staff. -- Krista Casada


Re: Github names & avatars

2016-05-13 Thread Xinok via Digitalmars-d

On Friday, 13 May 2016 at 17:02:20 UTC, Walter Bright wrote:
I'll ask again that the active Github users use their own name, 
and add to that if you could have a selfie as your github image.


It avoids when people who post as "Fred" on the newsgroup 
submit PRs as "HorseWrangler" and get annoyed when I don't 
realize they are the same person, and then I overlook them at 
the conference because I have no idea what they look like.


In today's surveillance state, the government already knows 
your name and what you look like, so being anonymous on github 
is a bit pointless, as if anyone cares that you are interested 
in D. I can understand if you're a celebrity or want nobody to 
know you're a dog, but that doesn't apply to most of us.


https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_you%27re_a_dog


If people are annoyed or offended because you don't recognize 
them in person or under some other name/alias, they need to get 
over themselves. That's just childish.


As for asking us to use our true identities online, there's very 
good reason NOT to do so. It's not just about hiding from our 
"big brother" the government/NSA/GCHQ. There are companies which 
specialize in doing background checks on potential hires and 
they'll dig up every little secret they can find about you 
online. While you may think an association with D isn't a big 
deal, some people out there can be extremely bigoted and will 
judge you based on that alone. It goes both ways as well; if one 
deplorable individual happens to get a lot of public attention 
and he becomes associated with the D community (oh how the media 
loves to spin a story), it will reflect poorly on our community 
as a whole.


So no, let's not start using our real names just because you fail 
to recognize some people.


Re: Always false float comparisons

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/12/2016 10:12 PM, Manu via Digitalmars-d wrote:

No. Do not.
I've worked on systems where the compiler and the runtime don't share
floating point precisions before, and it was a nightmare.
One anecdote, the PS2 had a vector coprocessor; it ran reduced (24bit
iirc?) float precision, code compiled for it used 32bits in the
compiler... to make it worse, the CPU also ran 32bits. The result was,
literals/constants, or float data fed from the CPU didn't match data
calculated by the vector unit at runtime (ie, runtime computation of
the same calculation that may have occurred at compile time to produce
some constant didn't match). The result was severe cracking and
visible/shimmering seams between triangles as sub-pixel alignment
broke down.
We struggled with this for years. It was practically impossible to
solve, and mostly involved workarounds.


I understand there are some cases where this is needed; I've proposed intrinsics 
for that.




I really just want D to use double throughout, like all the cpu's that
run code today. This 80bit real thing (only on x86 cpu's though!) is a
never ending pain.


It's 128 bits on other CPUs.



This sounds like designing specifically for my problem from above,
where the frontend is always different than the backend/runtime.
Please have the frontend behave such that it operates on the precise
datatype expressed by the type... the backend probably does this too,
and runtime certainly does; they all match.


Except this never happens anyway.


Re: Github names & avatars

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/13/2016 10:54 AM, marcpmichel wrote:

On Friday, 13 May 2016 at 17:02:20 UTC, Walter Bright wrote:

In today's surveillance state, the government already knows your name and what
you look like, so being anonymous on github is a bit pointless


Practically, you may be right, but philosophically it's a terrible argument!
(aka "don't care, nothing to hide")




I know, but I find it hard to see how github PRs contain personal information.


Re: Always false float comparisons

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/12/2016 8:18 PM, Jack Stouffer wrote:

And be 20x slower than hardware floats. Is it really worth it?


I seriously doubt the slowdown would be measurable, as the number of float ops 
the compiler performs is insignificant.


Re: Github names & avatars

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/13/2016 10:18 AM, Steven Schveighoffer wrote:

On 5/13/16 1:02 PM, Walter Bright wrote:

I'll ask again that the active Github users use their own name, and add
to that if you could have a selfie as your github image.


Sorry, this isn't going to happen :) @schveiguy is much better than
@StevenSchveighoffer. Some of us are not so short-name-blessed. I actually don't
mind if people call me schveiguy!


At least there is a connection between the two names.


Yes, I know some of you are very attached to your github handles. I'm not 
demanding anyone change, I just suggest it is to their advantage to. There 
are 123 contributors just to DMD; my brain is not going to remember 
everyone's handle vs name vs face.

  https://github.com/dlang/dmd/graphs/contributors

Note the paucity of names and faces.


It avoids when people who post as "Fred" on the newsgroup submit PRs as
"HorseWrangler" and get annoyed when I don't realize they are the same
person, and then I overlook them at the conference because I have no
idea what they look like.


Please don't make me learn @dicebot's real name :)


Even the Sociomantic people at Dconf tended to call him Dicebot, so no worries 
there!




Re: Github names & avatars

2016-05-13 Thread marcpmichel via Digitalmars-d

On Friday, 13 May 2016 at 17:02:20 UTC, Walter Bright wrote:
In today's surveillance state, the government already knows 
your name and what you look like, so being anonymous on github 
is a bit pointless


Practically, you may be right, but philosophically it's a 
terrible argument! (aka "don't care, nothing to hide")





Re: Github names & avatars

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/13/2016 10:19 AM, Adam D. Ruppe wrote:

It is nice to have a consistent pseudonym for matching up forum posts with irc
with github etc., but let's not make this a requirement.


It's a suggestion, not a requirement. I respect that some people have good 
reasons for anonymity.


Re: Github names & avatars

2016-05-13 Thread Adam D. Ruppe via Digitalmars-d

On Friday, 13 May 2016 at 17:02:20 UTC, Walter Bright wrote:
In today's surveillance state, the government already knows 
your name and what you look like, so being anonymous on github 
is a bit pointless, as if anyone cares that you are interested 
in D. I can understand if you're a celebrity or want nobody to 
know you're a dog, but that doesn't apply to most of us.


Actually, given the blatant misogyny frequently on display on 
this forum, about 51% of the world's population - literally most 
of us - have a perfectly understandable reason to maintain some 
level of anonymity in this community.


It is nice to have a consistent pseudonym for matching up forum 
posts with irc with github etc., but let's not make this a 
requirement.


Re: Github names & avatars

2016-05-13 Thread Steven Schveighoffer via Digitalmars-d

On 5/13/16 1:02 PM, Walter Bright wrote:

I'll ask again that the active Github users use their own name, and add
to that if you could have a selfie as your github image.


Sorry, this isn't going to happen :) @schveiguy is much better than 
@StevenSchveighoffer. Some of us are not so short-name-blessed. I 
actually don't mind if people call me schveiguy!


In fact, I have a counter-proposal. Instead of putting people's real 
names on their dconf name tags, let's just have their github handles :P.



It avoids when people who post as "Fred" on the newsgroup submit PRs as
"HorseWrangler" and get annoyed when I don't realize they are the same
person, and then I overlook them at the conference because I have no
idea what they look like.


Please don't make me learn @dicebot's real name :)

-Steve


Github names & avatars

2016-05-13 Thread Walter Bright via Digitalmars-d
I'll ask again that the active Github users use their own name, and add to that 
if you could have a selfie as your github image.


It avoids when people who post as "Fred" on the newsgroup submit PRs as 
"HorseWrangler" and get annoyed when I don't realize they are the same person, 
and then I overlook them at the conference because I have no idea what they look 
like.


In today's surveillance state, the government already knows your name and what 
you look like, so being anonymous on github is a bit pointless, as if anyone 
cares that you are interested in D. I can understand if you're a celebrity or 
want nobody to know you're a dog, but that doesn't apply to most of us.


https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_you%27re_a_dog


Re: Reproducible builds of D compilers

2016-05-13 Thread Pjotr Prins via Digitalmars-d

On Saturday, 7 May 2016 at 17:56:07 UTC, Johan Engelen wrote:
On Saturday, 7 May 2016 at 16:22:34 UTC, Vladimir Panteleev 
wrote:


https://blog.thecybershadow.net/2015/05/05/is-d-slim-yet/


Thanks for repeating the link to that blog article. I was 
reminded of it at DConf. Would be great if results from GDC and 
LDC could be added to the graphs, plus more tests!


Yes, nice read!




Re: Version block "conditions" with logical operators

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/13/2016 1:57 AM, Joakim wrote:

I'm trying, but Daniel seems against it, care to chip in?

https://github.com/dlang/dmd/pull/5772

Specifically, do you want the changes in that PR?  If so, do you prefer the use
of TARGET_POSIX as a runtime variable or listing each TARGET_OS separately?


I know there's some controversy in that thread, I guess I need to check in.


Re: The Case Against Autodecode

2016-05-13 Thread Steven Schveighoffer via Digitalmars-d

On 5/12/16 4:15 PM, Walter Bright wrote:


10. Autodecoded arrays cannot be RandomAccessRanges, losing a key
benefit of being arrays in the first place.


I'll repeat what I said in the other thread.

The problem isn't auto-decoding. The problem is hijacking the char[] and 
wchar[] (and variants) array type to mean autodecoding non-arrays.


If you think this code makes sense, then my definition of sane varies 
slightly from yours:


static assert(!hasLength!R && is(typeof(R.init.length)));
static assert(!is(ElementType!R == typeof(R.init[0])));
static assert(!isRandomAccessRange!R && is(typeof(R.init[0])) && 
is(typeof(R.init[0 .. $])));


I think D would be fine if string meant some auto-decoding struct with 
an immutable(char)[] array backing. I can accept and work with that. I 
can transform that into a char[] that makes sense if I have no use for 
auto-decoding. As of today, I have to use byCodePoint, or 
.representation, etc. and it's very unwieldy.


If I ran D, that's what I would do.

-Steve


Re: The Case Against Autodecode

2016-05-13 Thread Chris via Digitalmars-d

On Friday, 13 May 2016 at 14:06:28 UTC, Vladimir Panteleev wrote:

On Friday, 13 May 2016 at 13:41:30 UTC, Chris wrote:
PS Why do I get a "StopForumSpam error" every time I post 
today? Has anyone else experienced the same problem:


"StopForumSpam error: Socket error: Lookup error: getaddrinfo 
error: Name or service not known. Please solve a CAPTCHA to 
continue."


https://twitter.com/StopForumSpam


I don't understand. Does that mean we have to solve CAPTCHAs 
every time we post? Annoying CAPTCHAs at that.


Re: The Case Against Autodecode

2016-05-13 Thread Vladimir Panteleev via Digitalmars-d

On Friday, 13 May 2016 at 13:41:30 UTC, Chris wrote:
PS Why do I get a "StopForumSpam error" every time I post 
today? Has anyone else experienced the same problem:


"StopForumSpam error: Socket error: Lookup error: getaddrinfo 
error: Name or service not known. Please solve a CAPTCHA to 
continue."


https://twitter.com/StopForumSpam


Re: Always false float comparisons

2016-05-13 Thread Iain Buclaw via Digitalmars-d
On 13 May 2016 at 07:12, Manu via Digitalmars-d
 wrote:
> On 13 May 2016 at 11:03, Walter Bright via Digitalmars-d
>  wrote:
>> On 5/12/2016 4:32 PM, Marco Leise wrote:
>>>
>>> - Unless CTFE uses soft-float implementation, depending on
>>>   compiler and flags used to compile a D compiler, resulting
>>>   executable produces different CTFE floating-point results
>>
>>
>> I've actually been thinking of writing a 128 bit float emulator, and then
>> using that in the compiler internals to do all FP computation with.
>
> No. Do not.
> I've worked on systems where the compiler and the runtime don't share
> floating point precisions before, and it was a nightmare.

I have some bad news for you about CTFE then. This already happens in
DMD even though float is not emulated.  :-o
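
A minimal sketch of the kind of divergence (whether the two values actually 
differ is implementation-dependent; on x86 DMD the CTFE side may compute with 
80-bit intermediates):

import std.stdio;

enum float ctfeVal = 1.0f / 3.0f + 1.0f / 7.0f;  // evaluated at compile time

void main()
{
    float a = 1.0f, b = 3.0f, c = 7.0f;
    immutable float rtVal = a / b + a / c;        // evaluated at run time
    writefln("CTFE: %a  runtime: %a", ctfeVal, rtVal);
}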


Re: The Case Against Autodecode

2016-05-13 Thread Chris via Digitalmars-d

On Friday, 13 May 2016 at 13:17:44 UTC, Walter Bright wrote:

On 5/13/2016 2:12 AM, Chris wrote:
If autodecode is killed, could we have a test version asap? 
I'd be willing to
test my programs with autodecode turned off and see what 
happens. Others should
do likewise and we could come up with a transition strategy 
based on what happened.


You can avoid autodecode by using .byChar


Hm. It would be difficult to make sure that my whole code base 
doesn't do something, somewhere, that triggers auto decode.


PS Why do I get a "StopForumSpam error" every time I post 
today? Has anyone else experienced the same problem:


"StopForumSpam error: Socket error: Lookup error: getaddrinfo 
error: Name or service not known. Please solve a CAPTCHA to 
continue."


Re: The Case Against Autodecode

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/13/2016 3:43 AM, Marc Schütz wrote:

On Thursday, 12 May 2016 at 20:15:45 UTC, Walter Bright wrote:

7. Autodecode cannot be used with unicode path/filenames, because it is legal
(at least on Linux) to have invalid UTF-8 as filenames. It turns out in the
wild that pure Unicode is not universal - there's lots of dirty Unicode that
should remain unmolested, and autodecode does not play well with that.


This just means that filenames mustn't be represented as strings; it's unrelated
to auto decoding.


It means much more than that, filenames are just an example. I recently fixed 
MicroEmacs (my text editor) to assume the source is UTF-8, and display Unicode 
characters. But it still needs to work with dirty UTF-8 without throwing 
exceptions, modifying the text in-place, or other tantrums.


Re: The Case Against Autodecode

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/12/2016 11:50 PM, Bill Hicks wrote:

And I get called a troll and
other names when I list half a dozen things wrong with D, my posts get
removed/censored, etc, all because I try to inform people not to waste time with
D because it's a broken and failed language.


Posts that engage in personal attacks and bring up personal issues about other 
forum members get removed.


You're welcome to post here in a reasonably professional manner.



Re: The Case Against Autodecode

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/13/2016 2:12 AM, Chris wrote:

If autodecode is killed, could we have a test version asap? I'd be willing to
test my programs with autodecode turned off and see what happens. Others should
do likewise and we could come up with a transition strategy based on what 
happened.


You can avoid autodecode by using .byChar
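
A minimal example (std.utf.byChar gives a range of char, so no decoding 
happens):

import std.algorithm : count;
import std.utf : byChar;

void main()
{
    string s = "hällo";
    auto n = s.byChar.count;  // counts UTF-8 code units, no autodecoding
}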


Re: The Case Against Autodecode

2016-05-13 Thread Kagamin via Digitalmars-d

On Friday, 13 May 2016 at 10:38:09 UTC, Jonathan M Davis wrote:
IIRC, Andrei talked in TDPL about how Java's choice to go with 
UTF-16 was worse than the choice to go with UTF-8, because it 
was correct in many more cases


UTF-16 was a migration from UCS-2, and UCS-2 was superior at the 
time.


Re: Killing the comma operator

2016-05-13 Thread Nick Treleaven via Digitalmars-d

On Thursday, 12 May 2016 at 02:51:33 UTC, Lionello Lunesu wrote:
I'm trying to think of a case where changing a single value 
into a tuple with 2 (or more) values would silently change the 
behavior, but I can't think of any. Seems to me it would always 
cause an error, iff the result of the comma operator gets used.


int x,y;
auto f() {return (x=4,y);}
...
auto z = f();
static if (!is(typeof(z) == int))
  voteForTrump();

;-)

In practice, this is more plausible with function overloading - 
i.e. z.overload() calling a different function. If the comma 
operator returns void, the `auto z` line and f().overload() both 
fail.


Re: The Case Against Autodecode

2016-05-13 Thread Nick Treleaven via Digitalmars-d

On Friday, 13 May 2016 at 00:47:04 UTC, Jack Stouffer wrote:
If you're serious about removing auto-decoding, which I think 
you and others have shown has merits, you have to have THE 
SIMPLEST migration path ever, or you will kill D. I'm talking a 
simple press of a button.


char[] is always going to be unsafe for UTF-8. I don't think we 
can remove it or auto-decoding, only discourage use of it. We 
need a String struct IMO, without length or indexing. Its front 
can do autodecoding, and it has a ubyte[] raw() property too. 
(Possibly the byte length of front can be cached for use in 
popFront, assuming it was faster). This would be a gradual 
transition.


Re: Command line parsing

2016-05-13 Thread Russel Winder via Digitalmars-d
On Thu, 2016-05-12 at 18:25 +, Jesse Phillips via Digitalmars-d
wrote:
[…]
> unknown flags harder and displaying help challenging. So I'd like 
> to see getopt merge with another getopt

getopt is a 1970s C solution to the problem of command line parsing.
Most programming languages have moved on from getopt and created
language-idiomatic solutions to the problem. Indeed there are other,
better solutions in C now as well.

D should have one (or more maybe) D idiomatic command line processing
libraries *NOT* called getopt.
 
-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder



Re: The Case Against Autodecode

2016-05-13 Thread Marc Schütz via Digitalmars-d

On Friday, 13 May 2016 at 10:38:09 UTC, Jonathan M Davis wrote:
Ideally, algorithms would be Unicode aware as appropriate, but 
the default would be to operate on code units with wrappers to 
handle decoding by code point or grapheme. Then it's easy to 
write fast code while still allowing for full correctness. 
Granted, it's not necessarily easy to get correct code that 
way, but anyone who wants fully correctness without caring 
about efficiency can just use ranges of graphemes. Ranges of 
code points are rare regardless.


char[], wchar[] etc. can simply be made non-ranges, so that the 
user has to choose between .byCodePoint, .byCodeUnit (or 
.representation as it already exists), .byGrapheme, or even 
higher-level units like .byLine or .byWord. Ranges of char, wchar 
however stay as they are today. That way it's harder to 
accidentally get it wrong.
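
For illustration, the explicit choice could look like this with today's 
Phobos names (byDchar standing in for the .byCodePoint spelling above):

import std.utf : byCodeUnit, byDchar;
import std.uni : byGrapheme;

void main()
{
    string s = "noël";
    auto units  = s.byCodeUnit;   // char-level view, no decoding
    auto points = s.byDchar;      // explicit code point decoding
    auto graphs = s.byGrapheme;   // grapheme clusters
}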




Based on what I've seen in previous conversations on 
auto-decoding over the past few years (be it in the newsgroup, 
on github, or at dconf), most of the core devs think that 
auto-decoding was a major blunder that we continue to pay for. 
But unfortunately, even if we all agree that it was a huge 
mistake and want to fix it, the question remains of how to do 
that without breaking tons of code - though since AFAIK, Andrei 
is still in favor of auto-decoding, we'd have a hard time going 
forward with plans to get rid of it even if we had come up with 
a good way of doing so. But I would love it if we could get rid 
of auto-decoding and clean up string handling in D.


There is a simple deprecation path that's already been suggested. 
`isInputRange` and friends can output a helpful deprecation 
warning when they're called with a range that currently triggers 
auto-decoding.


Re: The Case Against Autodecode

2016-05-13 Thread Marc Schütz via Digitalmars-d

On Thursday, 12 May 2016 at 23:16:23 UTC, H. S. Teoh wrote:
Therefore, autodecoding actually only produces intuitively 
correct results when your string has a 1-to-1 correspondence 
between grapheme and code point. In general, this is only true 
for a small subset of languages, mainly a few common European 
languages and a handful of others.  It doesn't work for Korean, 
and doesn't work for any language that uses combining 
diacritics or other modifiers.  You need byGrapheme to have the 
correct results.


In fact, even most European languages are affected if NFD 
normalization is used, which is the default on MacOS X.
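
For example (std.uni.normalize makes the difference visible; the counts below 
are code points as autodecoding sees them):

import std.range : walkLength;
import std.uni : NFC, NFD, byGrapheme, normalize;

void main()
{
    auto nfc = "é".normalize!NFC;            // single code point U+00E9
    auto nfd = "é".normalize!NFD;            // e + combining acute accent
    assert(nfc.walkLength == 1);             // code point counts differ...
    assert(nfd.walkLength == 2);
    assert(nfd.byGrapheme.walkLength == 1);  // ...grapheme counts agree
}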


And this is actually the main problem with it: It was introduced 
to make unicode string handling correct. Well, it doesn't, 
therefore it has no justification.


Re: The Case Against Autodecode

2016-05-13 Thread Chris via Digitalmars-d

On Friday, 13 May 2016 at 10:38:09 UTC, Jonathan M Davis wrote:


Based on what I've seen in previous conversations on 
auto-decoding over the past few years (be it in the newsgroup, 
on github, or at dconf), most of the core devs think that 
auto-decoding was a major blunder that we continue to pay for. 
But unfortunately, even if we all agree that it was a huge 
mistake and want to fix it, the question remains of how to do 
that without breaking tons of code - though since AFAIK, Andrei 
is still in favor of auto-decoding, we'd have a hard time going 
forward with plans to get rid of it even if we had come up with 
a good way of doing so. But I would love it if we could get rid 
of auto-decoding and clean up string handling in D.


- Jonathan M Davis


Why not just try it in a separate test release? Only then can we 
know to what extent it actually breaks code, and what remedies we 
could come up with.




Re: The Case Against Autodecode

2016-05-13 Thread Marc Schütz via Digitalmars-d

On Thursday, 12 May 2016 at 20:15:45 UTC, Walter Bright wrote:
7. Autodecode cannot be used with unicode path/filenames, 
because it is legal (at least on Linux) to have invalid UTF-8 
as filenames. It turns out in the wild that pure Unicode is not 
universal - there's lots of dirty Unicode that should remain 
unmolested, and autodecode does not play well with that.


This just means that filenames mustn't be represented as strings; 
it's unrelated to auto decoding.


Re: synchronized with multiple arguments

2016-05-13 Thread ZombineDev via Digitalmars-d

On Friday, 13 May 2016 at 09:17:06 UTC, Andrei Alexandrescu wrote:
A reader reminded me (thanks!) that in TDPL synchronized with 
multiple argument does the right thing - locks objects in 
increasing order of address.


So now to everyone's unpleasant surprise, the sample code in 
TDPL compiles and runs, it just has difficult-to-detect 
problems.


So regardless of the discussion of the comma operator, 
synchronized with multiple arguments should just work.


+1


Is synchronized being lowered to some function calls?


Here's the relevant code:
https://github.com/dlang/dmd/blob/master/src/statement.d#L4974

IIUC, the code assumes that there is a single object that needs 
to be locked. Which is definitely wrong.
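
For comparison, a hand-written sketch of the discipline TDPL describes 
(illustrative only, not the actual compiler lowering):

void lockInOrder(Object a, Object b, scope void delegate() crit)
{
    if (cast(void*)a > cast(void*)b)
    {
        auto tmp = a; a = b; b = tmp;
    }
    synchronized (a) synchronized (b)   // acquire in increasing address order
        crit();
}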


BTW, should synchronized (obj) allow calling non-shared methods 
of obj inside the block?





Re: The Case Against Autodecode

2016-05-13 Thread Jonathan M Davis via Digitalmars-d
On Thursday, May 12, 2016 13:15:45 Walter Bright via Digitalmars-d wrote:
> On 5/12/2016 9:29 AM, Andrei Alexandrescu wrote:
>  > I am as unclear about the problems of autodecoding as I am about the
>  > necessity to remove curl. Whenever I ask I hear some arguments that work
>  > well emotionally but are scant on reason and engineering. Maybe it's
>  > time to rehash them? I just did so about curl, no solid argument seemed
>  > to come together. I'd be curious of a crisp list of grievances about
>  > autodecoding. -- Andrei
>
> Here are some that are not matters of opinion.
>
> 1. Ranges of characters do not autodecode, but arrays of characters do. This
> is a glaring inconsistency.
>
> 2. Every time one wants an algorithm to work with both strings and ranges,
> you wind up special casing the strings to defeat the autodecoding, or to
> decode the ranges. Having to constantly special case it makes for more
> special cases when plugging together components. These issues often escape
> detection when unittesting because it is convenient to unittest only with
> arrays.
>
> 3. Wrapping an array in a struct with an alias this to an array turns off
> autodecoding, another special case.
>
> 4. Autodecoding is slow and has no place in high speed string processing.
>
> 5. Very few algorithms require decoding.
>
> 6. Autodecoding has two choices when encountering invalid code units - throw
> or produce an error dchar. Currently, it throws, meaning no algorithms
> using autodecode can be made nothrow.
>
> 7. Autodecode cannot be used with unicode path/filenames, because it is
> legal (at least on Linux) to have invalid UTF-8 as filenames. It turns out
> in the wild that pure Unicode is not universal - there's lots of dirty
> Unicode that should remain unmolested, and autodecode does not play well
> with that.
>
> 8. In my work with UTF-8 streams, dealing with autodecode has caused me
> considerably extra work every time. A convenient timesaver it ain't.
>
> 9. Autodecode cannot be turned off, i.e. it isn't practical to avoid
> importing std.array one way or another, and then autodecode is there.
>
> 10. Autodecoded arrays cannot be RandomAccessRanges, losing a key benefit of
> being arrays in the first place.
>
> 11. Indexing an array produces different results than autodecoding, another
> glaring special case.

It also results in constantly special-casing algorithms for narrow strings
in order to avoid auto-decoding. Phobos does this all over the place. We
have a ridiculous amount of code in Phobos just to avoid auto-decoding, and
anyone who wants high performance will have to do the same.
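
The pattern looks something like this (a sketch of the special-casing, not 
actual Phobos code; here strings are counted by code unit instead of being 
autodecoded):

import std.range.primitives;
import std.traits : isNarrowString;

size_t myCount(R)(R r)
{
    static if (isNarrowString!R)
        return r.length;        // bypass autodecoding: count code units
    else
    {
        size_t n;
        for (; !r.empty; r.popFront())
            ++n;
        return n;
    }
}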

And it's not like auto-decoding is even correct. It would be one thing if
auto-decoding were fully correct but slow, but to be fully correct, it would
need to operate at the grapheme level, not the code point level. So, by
default, we get slower code without actually getting fully correct code.

So, we're neither fast nor correct. We _are_ correct in more cases than we'd
be if we simply acted like ASCII was all there was, but what we end up with
is the illusion that we're correct when we're not. IIRC, Andrei talked in
TDPL about how Java's choice to go with UTF-16 was worse than the choice to
go with UTF-8, because it was correct in many more cases to operate on the
code unit level as if a code unit were a character, and it was therefore
harder to realize that what you were doing was wrong, whereas with UTF-8,
it's obvious very quickly. We currently have that same problem with
auto-decoding except that it's treating UTF-32 code units as if they were
full characters rather than treating UTF-16 code units as if they were full
characters.

Ideally, algorithms would be Unicode aware as appropriate, but the default
would be to operate on code units with wrappers to handle decoding by code
point or grapheme. Then it's easy to write fast code while still allowing
for full correctness. Granted, it's not necessarily easy to get correct code
that way, but anyone who wants fully correctness without caring about
efficiency can just use ranges of graphemes. Ranges of code points are rare
regardless.

Based on what I've seen in previous conversations on auto-decoding over the
past few years (be it in the newsgroup, on github, or at dconf), most of the
core devs think that auto-decoding was a major blunder that we continue to
pay for. But unfortunately, even if we all agree that it was a huge mistake
and want to fix it, the question remains of how to do that without breaking
tons of code - though since AFAIK, Andrei is still in favor of
auto-decoding, we'd have a hard time going forward with plans to get rid of
it even if we had come up with a good way of doing so. But I would love it
if we could get rid of auto-decoding and clean up string handling in D.

- Jonathan M Davis



Re: The Case Against Autodecode

2016-05-13 Thread Kagamin via Digitalmars-d

On Friday, 13 May 2016 at 06:50:49 UTC, Bill Hicks wrote:
not to waste time with D because it's a broken and failed 
language.


D is a better broken thing among all the broken things in this 
broken world, so it's to be expected that people prefer to spend 
time on it.


Re: The backlash against scripting languages has begun

2016-05-13 Thread Chris via Digitalmars-d

On Friday, 13 May 2016 at 07:31:26 UTC, Joakim wrote:
He mentions Swift, Rust, and Go as his hopes at the end, too 
bad he doesn't include D:


https://medium.com/@deathdisco/today-i-accept-that-rails-is-yesterday-s-software-b5af35c9af39

He'd probably be happy with D, particularly given Walter's 
stance on the monkey-patching that guy now rues:


"Monkey-patching has, in Ruby, been popular and powerful. It 
has also turned out to be a disaster. It does not scale, and is 
not conducive to more than one person/team working on the code 
base."

http://forum.dlang.org/post/jsat48$ujt$1...@digitalmars.com

That blogger probably wishes he read that quote from Walter 
four years ago. ;)


"basing themselves on interpreted, slow languages that favoured 
‘easy to learn’ over ‘easy to maintain’."


Yep. Frustration kicks in sooner or later. I always tell people 
not to use scripting languages for bigger or real world projects.


Re: The Case Against Autodecode

2016-05-13 Thread Chris via Digitalmars-d

On Friday, 13 May 2016 at 06:50:49 UTC, Bill Hicks wrote:


Wow, that's eleven things wrong with just one tiny element of 
D, with the potential to cause problems, whether fixed or not.  
And I get called a troll and other names when I list half a 
dozen things wrong with D, my posts get removed/censored, etc, 
all because I try to inform people not to waste time with D 
because it's a broken and failed language.


*sigh*

Phobos, a piece of useless rock orbiting a dead planet ... the 
irony.


Is there any PL that doesn't have multiple issues? Look at Swift. 
They keep changing it, although it started out as _the_ big 
thing, because, you know, it's Apple. C#, Java, Go and of course 
the chronically ill C++. There is no such thing as the perfect 
PL, and as hardware is changing, PLs are outdated anyway and have 
to catch up. The question is not whether a language sucks or not, 
the question is which language sucks the least for the task at 
hand.


PS I wonder does Bill Hicks know you're using his name? But I 
guess he's lost interest in this planet and happily lives on Mars 
now.




synchronized with multiple arguments

2016-05-13 Thread Andrei Alexandrescu via Digitalmars-d
A reader reminded me (thanks!) that in TDPL synchronized with multiple 
argument does the right thing - locks objects in increasing order of 
address.


So now to everyone's unpleasant surprise, the sample code in TDPL 
compiles and runs, it just has difficult-to-detect problems.


So regardless of the discussion of the comma operator, synchronized with 
multiple arguments should just work.


Is synchronized being lowered to some function calls?


Andrei


Re: The Case Against Autodecode

2016-05-13 Thread Chris via Digitalmars-d

On Friday, 13 May 2016 at 01:00:54 UTC, Walter Bright wrote:

On 5/12/2016 5:47 PM, Jack Stouffer wrote:
D is much less popular now than was Python at the time, and 
Python 2 problems
were more straightforward than the auto-decoding problem.  
You'll need a very
clear migration path, years long deprecations, and automatic 
tools in order to
make the transition work, or else D's usage will be 
permanently damaged.


I agree, if it is possible at all.


I don't know to which extent my problems with string handling are 
related to autodecode. However, I had to write some utility 
functions to get around issues with code points, graphemes and 
the like. While it is not a huge issue in terms of programming 
time, it does slow down my program, because even simple 
operations may have to go through a utility function to make sure the 
result is correct (.length for example). But that might be an 
issue related to Unicode in general (or D's handling of it).
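
A small example of the kind of mismatch I mean (the combining form is spelled 
out with an escape to make it visible):

import std.range : walkLength;
import std.uni : byGrapheme;

void main()
{
    string s = "noe\u0308l";                   // 'ë' as e + combining diaeresis
    assert(s.length == 6);                     // UTF-8 code units
    assert(s.walkLength == 5);                 // code points (autodecoded)
    assert(s.byGrapheme.walkLength == 4);      // user-perceived characters
}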


If autodecode is killed, could we have a test version asap? I'd 
be willing to test my programs with autodecode turned off and see 
what happens. Others should do likewise and we could come up with 
a transition strategy based on what happened.


Re: Version block "conditions" with logical operators

2016-05-13 Thread Joakim via Digitalmars-d

On Thursday, 12 May 2016 at 01:58:33 UTC, Walter Bright wrote:

On 5/11/2016 6:52 PM, Joakim wrote:
That example is misleading, as that was translated from C++ 
and the host half of

it was removed a couple months ago:

https://github.com/dlang/dmd/pull/5549/files

I'll submit a PR for the rest: I'm sick of this argument that 
"ddmd is using

static if, so why shouldn't I?"


Please do. That code is an abomination.


I'm trying, but Daniel seems against it, care to chip in?

https://github.com/dlang/dmd/pull/5772

Specifically, do you want the changes in that PR?  If so, do you 
prefer the use of TARGET_POSIX as a runtime variable or listing 
each TARGET_OS separately?


Re: Command line parsing

2016-05-13 Thread Andrei Alexandrescu via Digitalmars-d

On 5/12/16 8:21 PM, Nick Sabalausky wrote:

You may want to ask Sonke about his specific reasons and experiences
with that design.


Yes please! -- Andrei


Re: Casting Pointers?

2016-05-13 Thread Rene Zwanenburg via Digitalmars-d

On Thursday, 12 May 2016 at 22:23:38 UTC, Marco Leise wrote:
The pointer cast solution is specifically supported at CTFE, 
because /unions/ don't work there. :p


Well that's a problem ^^

I remember a discussion quite a while ago where Walter stated D 
should have strict aliasing rules, let me see if I can find it.. 
Ah here:


http://forum.dlang.org/post/jg3f21$1jqa$1...@digitalmars.com

On Saturday, 27 July 2013 at 06:58:04 UTC, Walter Bright wrote:
Although it isn't in the spec, D should be "strict aliasing". 
This is because:


1. it enables better code generation

2. there are ways, such as unions, to get the other aliasing 
that doesn't break strict aliasing
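
For the record, the union approach looks like this at runtime (a minimal 
sketch; as noted above, unions won't help in CTFE):

union FloatBits
{
    float f;
    uint  u;
}

void main()
{
    FloatBits fb;
    fb.f = 1.0f;
    assert(fb.u == 0x3f80_0000);  // IEEE 754 bit pattern of 1.0f
}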


On Saturday, 27 July 2013 at 08:59:54 UTC, Walter Bright wrote:

On 7/27/2013 1:57 AM, David Nadlinger wrote:

We need to carefully formalize this then, and quickly.

[...]

I agree. Want to do an enhancement request on bugzilla for it?


Re: Always false float comparisons

2016-05-13 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 13 May 2016 at 05:12:14 UTC, Manu wrote:

No. Do not.
I've worked on systems where the compiler and the runtime don't 
share

floating point precisions before, and it was a nightmare.


Use reproducible cross platform IEEE754-2008 and use exact 
rational numbers. All other representations are just painful. 
Nothing wrong with supporting 16, 32, 64 and 128 bit, but stick 
to the reproducible standard. If people want "non-reproducible 
fast math", then they should specify it.




Re: The Case Against Autodecode

2016-05-13 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 13 May 2016 at 00:47:04 UTC, Jack Stouffer wrote:
D is much less popular now than was Python at the time, and 
Python 2 problems were more straightforward than the 
auto-decoding problem.  You'll need a very clear migration 
path, years long deprecations, and automatic tools in order to 
make the transition work, or else D's usage will be permanently 
damaged.


Python 2 is/was deployed at a much larger scale and with far more 
library dependencies, so I don't think it is comparable. It is 
easier for D to get away with breaking changes.


I am still using Python 2.7 exclusively, but now I use:
from __future__ import division, absolute_import, with_statement, 
unicode_literals


D can do something similar.

C++ is using a comparable solution. Use switches to turn on 
different compatibility levels.
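
Sketched in D terms, that could be as simple as gating behaviour behind a 
version identifier (the name NoAutodecode is hypothetical), selected with 
-version=NoAutodecode on the dmd command line:

version (NoAutodecode)
{
    // string range primitives operate on code units here
}
else
{
    // current auto-decoding behaviour
}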




The backlash against scripting languages has begun

2016-05-13 Thread Joakim via Digitalmars-d
He mentions Swift, Rust, and Go as his hopes at the end, too bad 
he doesn't include D:


https://medium.com/@deathdisco/today-i-accept-that-rails-is-yesterday-s-software-b5af35c9af39

He'd probably be happy with D, particularly given Walter's stance 
on the monkey-patching that guy now rues:


"Monkey-patching has, in Ruby, been popular and powerful. It has 
also turned out to be a disaster. It does not scale, and is not 
conducive to more than one person/team working on the code base."

http://forum.dlang.org/post/jsat48$ujt$1...@digitalmars.com

That blogger probably wishes he read that quote from Walter four 
years ago. ;)


Re: The Case Against Autodecode

2016-05-13 Thread poliklosio via Digitalmars-d

On Friday, 13 May 2016 at 06:50:49 UTC, Bill Hicks wrote:

On Thursday, 12 May 2016 at 20:15:45 UTC, Walter Bright wrote:

(...)
Wow, that's eleven things wrong with just one tiny element of 
D, with the potential to cause problems, whether fixed or not.  
And I get called a troll and other names when I list half a 
dozen things wrong with D, my posts get removed/censored, etc, 
all because I try to inform people not to waste time with D 
because it's a broken and failed language.


*sigh*

Phobos, a piece of useless rock orbiting a dead planet ... the 
irony.


You get banned because there is a difference between torpedoing a 
project and offering constructive criticism.
Also, you are missing the point by claiming that a technical 
problem is sure to kill D. Note that very successful languages 
like C++, Python and so on have also undergone heated discussions 
about various features, and often live with design mistakes for many 
years. The real reason why languages are successful is what they 
enable, not how many quirks they have.

Quirks are why they get replaced by others 20 years later. :)


Re: The Case Against Autodecode

2016-05-13 Thread Ethan Watson via Digitalmars-d

On Friday, 13 May 2016 at 06:50:49 UTC, Bill Hicks wrote:

*rant*


Actually, chap, it's the attitude that's the turn-off in your 
post there. Listing problems in order to improve them, and 
listing problems to convince people something is a waste of time 
are incompatible mindsets around here.


Re: The end of curl (in phobos)

2016-05-13 Thread Johannes Pfau via Digitalmars-d
Am Sun, 8 May 2016 11:33:07 +0300
schrieb Andrei Alexandrescu :

> On 5/8/16 11:05 AM, Jonathan M Davis via Digitalmars-d wrote:
> > On Sunday, May 08, 2016 02:44:48 Adam D. Ruppe via Digitalmars-d
> > wrote:  
> >> On Saturday, 7 May 2016 at 20:50:53 UTC, Jonas Drewsen wrote:  
> >>> But std.net.curl supports not just HTTP but also FTP etc. so i
> >>> guess that won't suffice.  
> >>
> >> We can always implement ftp too, it isn't that complicated of a
> >> protocol.
> >>
> >> Though, I suspect its users are a tiny minority and they might
> >> not mind depending on a separate curl package.  
> >
> > An alternative would be to move std.net.curl into a dub package.  
> 
> That would still be a breaking change, is that correct?
> 
> I'm unclear on what the reasons are for removing libcurl so I'd love
> to see them stated clearly. Walter's argumentation was vague - code
> that we don't control etc. There have been past reports of issues
> with libcurl on windows, have those not been permanently solved?
> 
> I even see a plus: dealing with libcurl is a good exercise in eating
> our dogfood regarding "interfacing with C libraries is trivial"
> stance. Having to deal with it is a reflection of what other projects
> have to do on an ongoing basis.
> 
> 
> Andrei
> 

The curl problems are more or less solved now, but it has caused
quite some trouble:

As long as we were statically linking curl:
 * We can't use curl when producing cross compilers for GDC as the
   minimal builds used by crosstool do not include curl. They do not
   even include zlib, we're just lucky that zlib is in GCC and we
   compile it statically into druntime. OTOH I'm not sure if we can get
   conflicts between our statically compiled zlib and libraries which
   link against zlib.
 * For static libraries, we don't need curl at link time, but for
   dynamic libraries we do need it.
 * There was the library versioning issue which made DMD builds
   unusable on some distributions.
 * http://bugzilla.gdcproject.org/show_bug.cgi?id=202 Even programs not
   using libcurl will sometimes require linking curl (This is because
   common templates such as std.conv.to might be instantiated in curl,
   so curl.o is pulled in, which depends on libcurl)
 * Library order when linking is important nowadays, so you need a way
   to specify -lcurl in the correct location relative to -lphobos

Still open issues:
 * Even when dynamically loading curl, it introduces a new dependency:
   libdl for dynamic loading. This is not an issue for shared
   libraries, but the list of libraries which need to be hard coded when
   linking a static libphobos is already quite long:
   -lc -lrt -lm -ldl -lz -lstdc++ -luuid -lws2_32
   In fact GDC doesn't link some of these yet and Iain doesn't want to
   add more special cases to our linking code
(https://github.com/D-Programming-GDC/GDC/pull/182
https://github.com/D-Programming-GDC/GDC/pull/181).


Additionally, the complete API, integration with D features, and
performance are not really up to Phobos standards. This is because of
libcurl API limitations though, so there's nothing we can do about it.
As long as we don't have a D replacement though, it's still the best
HTTP client available...