Re: What are the worst parts of D?

2014-10-08 Thread Paolo Invernizzi via Digitalmars-d
On Wednesday, 8 October 2014 at 20:35:05 UTC, Andrei Alexandrescu 
wrote:

On 10/8/14, 4:17 AM, Don wrote:
On Monday, 6 October 2014 at 19:07:40 UTC, Andrei Alexandrescu 
wrote:


And personally, I doubt that many companies would use D, even 
if with
perfect C++ interop, if the toolchain stayed at the current 
level.


That speculation turns out to not be true for Facebook. My turn 
to speculate - many other companies have existing codebases in 
C++, so Sociomantic is "special".


Well, IMHO, when discussing 'strategies', pretty much everything 
is speculation...
C++ interop can also be attractive when you need to start a new 
project and you need C++ libs.


But the point is that, again IMHO, you tend to conflate 
Facebook's needs with D's needs (I know I'll receive pain back 
for this ;-).


Sociomantic is not so special at all in not having a previous 
C++ codebase: I personally know plenty of cases like that.


But if D doesn't stop chasing "new features" while never 
finishing the previous plans, well, my speculation is: I don't 
know about future adopters, but it is surely souring current 
adopters; and that "surely" is based on what we feel here at the 
SR Labs company.


That's of course good, but the reality is we're in a 
complicated trade-off space with "important", "urgent", "easy 
to do", "return on investment", "resource allocation" as axes. 
An example of the latter - ideally we'd put Walter on the more 
difficult tasks and others on the easy wins. Walter working on 
improving documentation might not be the best use of his time, 
although better documentation is an easy win.


Well, I've read your and Walter's comments on the multiple alias 
this PR, so that's good: but the point is that it was the 
community that pushed both of you onto that track, and that is 
symptomatic of an attitude.


And now, shields up, Ms Sulu!
--
/Paolo


Re: What are the worst parts of D?

2014-10-08 Thread Paolo Invernizzi via Digitalmars-d

On Thursday, 9 October 2014 at 00:30:53 UTC, Walter Bright wrote:

On 9/25/2014 4:08 AM, Don wrote:


And adding a @ in front of pure, nothrow.


https://issues.dlang.org/show_bug.cgi?id=13388

It has generated considerable discussion.


Please break the language, now.

---
/Paolo





Re: What are the worst parts of D?

2014-10-08 Thread Walter Bright via Digitalmars-d

On 9/25/2014 4:08 AM, Don wrote:

I'd also like to see us getting rid of those warts like assert(float.nan) being
true.


https://issues.dlang.org/show_bug.cgi?id=13489

It has some serious issues with it - I suspect it'll cause uglier problems than 
it fixes.




And adding a @ in front of pure, nothrow.


https://issues.dlang.org/show_bug.cgi?id=13388

It has generated considerable discussion.


Re: Parameterized unit testing and benchmarking of phobos

2014-10-08 Thread Andrei Alexandrescu via Digitalmars-d

On 10/8/14, 4:44 PM, Robert burner Schadek wrote:

On Wednesday, 8 October 2014 at 23:31:59 UTC, Andrei Alexandrescu wrote:

On 10/8/14, 2:37 PM, Robert burner Schadek wrote:

version(unittest_benchmark) {
unittest {


No need for the outer braces :o). -- Andrei


I just love my braces.


If you love your braces you gotta love your indentation. They come 
together... -- Andrei




Re: Parameterized unit testing and benchmarking of phobos

2014-10-08 Thread Robert burner Schadek via Digitalmars-d
On Wednesday, 8 October 2014 at 23:31:59 UTC, Andrei Alexandrescu 
wrote:

On 10/8/14, 2:37 PM, Robert burner Schadek wrote:

version(unittest_benchmark) {
unittest {


No need for the outer braces :o). -- Andrei


I just love my braces.
If that's going to be the most negative comment, I will have a 
PR ready before next week.


Re: Parameterized unit testing and benchmarking of phobos

2014-10-08 Thread Andrei Alexandrescu via Digitalmars-d

On 10/8/14, 2:37 PM, Robert burner Schadek wrote:

version(unittest_benchmark) {
unittest {


No need for the outer braces :o). -- Andrei


compile time configurations

2014-10-08 Thread Freddy via Digitalmars-d

I recently thought of the idea of using string imports for
compile-time configuration.
Something like this:
---
import std.stdio;
import std.conv;
import std.range;
import std.algorithm;

string getVar(string fname, string var)()
{
    // scan each "name=value" line of the string-imported file
    foreach (property; import(fname).splitter('\n'))
    {
        auto eq = property.countUntil("=");
        if (eq < 0)
            continue; // skip blank or malformed lines
        if (var == property[0 .. eq])
            return property[eq + 1 .. $];
    }
    assert(0, "unable to find property");
}

enum dimensions = getVar!("shapes", "dimensions").to!uint;

void main()
{
    writeln("There are ", dimensions, " dimensions");
}
---
Where you add your config dir (the one containing the "shapes" 
file) to dub's stringImportPaths.
How do you guys feel about this? Should we use something like
JSON for config files?


Re: What are the worst parts of D?

2014-10-08 Thread Manu via Digitalmars-d
On 09/10/2014 2:40 am, "Joakim via Digitalmars-d" <
digitalmars-d@puremagic.com> wrote:
>
> On Wednesday, 8 October 2014 at 13:55:11 UTC, Manu via Digitalmars-d
wrote:
>>
>> On 08/10/2014 9:20 pm, "Don via Digitalmars-d"
>>>
>>> So what do we care about? Mainly, we care about improving the core
>>
>> product.
>>>
>>>
>>> In general I think that in D we have always suffered from spreading
>>
>> ourselves too thin. We've always had a bunch of cool new features that
>> don't actually work properly. Always, the focus shifts to something else,
>> before the previous feature was finished.
>>>
>>>
>>> And personally, I doubt that many companies would use D, even if with
>>
>> perfect C++ interop, if the toolchain stayed at the current level.
>>
>> As someone who previously represented a business interest, I couldn't
agree
>> more.
>> Aside from my random frustrated outbursts on a very small set of language
>> issues, the main thing I've been banging on from day 1 is the tooling.
Much
>> has improved, but it's still a long way from 'good'.
>>
>> Debugging, ldc (for windows), and editor integrations (auto complete,
>> navigation, refactoring tools) are my impersonal (and hopefully
>> non-controversial) short list. They trump everything else I've ever
>> complained about.
>> The debugging experience is the worst of any language I've used since the
>> 90's, and I would make that top priority.
>
>
> While it would be great if there were a company devoted to such D
tooling, it doesn't exist right now.  It is completely unrealistic to
expect a D community of unpaid volunteers to work on these features for
your paid projects.  If anybody in the community cared as much about these
features as you, they'd have done it already.
>
> I suggest you two open bugzilla issues for all these specific bugs or
enhancements and put up bounties for their development.  If you're not
willing to do that, expecting the community to do work for you for free is
just whining that is easily ignored.

We're just talking about what we think would best drive adoption.
Businesses aren't likely to adopt a language with the understanding that
they need to write its tooling. Debugging, code completion and
refactoring are all expert tasks that probably require compiler involvement.

I know it's easy to say that businesses with budget should contribute more.
But it's a tough proposition. Businesses will look to change language if it
saves them time and money. If it's going to cost them money, and the state
of tooling is likely to cost them time, then it's not a strong proposition.
It's a chicken-and-egg problem. I'm sure businesses will be happy to
contribute financially when it's a risk-free investment; i.e., when it's
been demonstrated that the stuff works for them.


Re: On Phobos GC hunt

2014-10-08 Thread ketmar via Digitalmars-d
On Wed, 08 Oct 2014 23:40:01 +0200
Timon Gehr via Digitalmars-d  wrote:

> This is probably a regression somewhere after 2.060, because with
> 2.060 I get
> 
> Error: variable __ctfe cannot be read at compile time
> Error: expression __ctfe is not constant or does not evaluate to a
> bool
> 
> as I'd expect.
i remember now that i was copypasting toHash() from druntime some time
ago and changed "if (__ctfe)" to "static if (__ctfe)" in the process. it
compiles and works fine, and i didn't even notice what i did until i
tried to change the non-ctfe part of toHash() and found that my changes
had no effect at all. and then i discovered that "static".

this was 2.066 or 2.067-git.

and now i can clearly say that "static if (__ctfe)" leaves only the
ctfe part.

that was somewhat confusing, as i was pretty sure that "if (__ctfe)"
*must* be used with "static".




Re: On Phobos GC hunt

2014-10-08 Thread Timon Gehr via Digitalmars-d

On 10/08/2014 10:25 PM, ketmar via Digitalmars-d wrote:

On Wed, 8 Oct 2014 23:20:13 +0300
ketmar via Digitalmars-d  wrote:

p.s. or vice versa: "static if (__ctfe)" is always true, to non-ctfe
code will be removed. sorry, i can't really remember what is true, but
anyway, it works by removeing one of the branches altogether.



This is probably a regression somewhere after 2.060, because with 2.060 
I get


Error: variable __ctfe cannot be read at compile time
Error: expression __ctfe is not constant or does not evaluate to a bool

as I'd expect.


Parameterized unit testing and benchmarking of phobos

2014-10-08 Thread Robert burner Schadek via Digitalmars-d
Lately, I find myself wondering if I should add parameterized 
unit tests to std.string, because the last few bugs I fixed were 
not caught by tests, as the test data was not good enough. I know 
random data is not perfect either, but it would be a good 
addition IMO.


Additionally, I thought these unit tests could be used to 
benchmark the performance of the functions and the compilers, 
thus allowing continuous monitoring of performance changes.


I'm thinking of something like:

version(unittest_benchmark) {
    unittest {
        auto ben = Benchmark("bench_result_file", numberOfRuns);
        auto values = ValueGen!(StringGen(0, 12), IntGen(2, 24))(ben);

        foreach (string rStr, int rInt; values) {
            auto rslt = functionToTest(rStr, rInt);
            // some asserts
        }
    }
}

The bench_result_file would be a csv with e.g. date,runtime,...

ideas, suggestions?
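For what it's worth, the Benchmark/ValueGen/StringGen/IntGen names in the sketch above are hypothetical, but the random-data half can already be done with nothing beyond std.random; a minimal self-contained sketch, using std.string.capitalize as an arbitrary stand-in for the function under test and a fixed seed so failures are reproducible:
---
import std.random : Random, uniform;
import std.string : capitalize;
import std.ascii : isUpper;

unittest
{
    auto rnd = Random(42); // fixed seed: failures are reproducible

    foreach (run; 0 .. 1_000)
    {
        // random lowercase string of length 0 .. 12
        char[] buf;
        foreach (i; 0 .. uniform(0, 13, rnd))
            buf ~= cast(char) uniform!"[]"('a', 'z', rnd);
        immutable s = buf.idup;

        // property-style asserts against the function under test
        immutable r = capitalize(s);
        assert(r.length == s.length);
        assert(s.length == 0 || isUpper(r[0]));
    }
}
---
The point of the properties (length preserved, first char upper-cased) is that they hold for *any* generated input, which is what makes random data usable without golden outputs.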


Re: What are the worst parts of D?

2014-10-08 Thread Walter Bright via Digitalmars-d

On 10/6/2014 11:13 AM, Dicebot wrote:

Especially because
you have stated that previous proposal (range-fication) which did fix the issue
_for me_ is not on the table anymore.


I think it's more stalled because of the setExtension controversy.



How about someone starts paying attention to what Don posts? That could be an
incredible start.


I don't always agree with Don, but he's almost always right and his last post 
was almost entirely implemented.




Re: What are the worst parts of D?

2014-10-08 Thread Andrei Alexandrescu via Digitalmars-d

On 10/8/14, 4:17 AM, Don wrote:

On Monday, 6 October 2014 at 19:07:40 UTC, Andrei Alexandrescu wrote:

More particulars would be definitely welcome. I should add that
Sociomantic has an interesting position: it's a 100% D shop so
interoperability is not a concern for them, and they did their own GC
so GC-related improvements are unlikely to make a large difference for
them. So "C++ and GC" is likely not to be high priority for them. --
Andrei


Exactly. C++ support is of no interest at all, and GC is something we
contribute to, rather than something we expect from the community.


That's awesome, thanks!


Interestingly we don't even care much about libraries, we've done
everything ourselves.

So what do we care about? Mainly, we care about improving the core product.

In general I think that in D we have always suffered from spreading
ourselves too thin. We've always had a bunch of cool new features that
don't actually work properly. Always, the focus shifts to something
else, before the previous feature was finished.

At Sociomantic, we've been successful in our industry using only the
features of D1. We're restricted to using D's features from 2007!!
Feature-wise, practically nothing from the last seven years has helped us!

With something like C++ support, it's only going to win companies over
when it is essentially complete. That means that working on it is a huge
investment that doesn't start to pay for itself for a very long time. So
although it's a great goal, with a huge potential payoff, I don't think
that it should be consuming a whole lot of energy right now.

And personally, I doubt that many companies would use D, even if with
perfect C++ interop, if the toolchain stayed at the current level.


That speculation turns out to not be true for Facebook. My turn to 
speculate - many other companies have existing codebases in C++, so 
Sociomantic is "special".



As I said in my Dconf 2013 talk -- I advocate a focus on Return On
Investment.
I'd love to see us chasing the easy wins.


That's of course good, but the reality is we're in a complicated 
trade-off space with "important", "urgent", "easy to do", "return on 
investment", "resource allocation" as axes. An example of the latter - 
ideally we'd put Walter on the more difficult tasks and others on the 
easy wins. Walter working on improving documentation might not be the 
best use of his time, although better documentation is an easy win.



Andrei



Re: Worse is better?

2014-10-08 Thread John Carter via Digitalmars-d

On Wednesday, 8 October 2014 at 19:44:04 UTC, Joakim wrote:
This is a somewhat famous phrase from a late '80s essay that's 
mentioned sometimes, but I hadn't read it till this week.


Keep reading; he is still pretty ambivalent about the whole 
concept...


http://dreamsongs.com/WorseIsBetter.html


Re: On Phobos GC hunt

2014-10-08 Thread Peter Alexander via Digitalmars-d
On Wednesday, 8 October 2014 at 20:15:51 UTC, Steven 
Schveighoffer wrote:

On 10/8/14 4:10 PM, Andrei Alexandrescu wrote:

On 10/8/14, 1:01 PM, Andrei Alexandrescu wrote:




That's a bummer. Can we get the compiler to remove the 
"if (__ctfe)" code after semantic checking?


Or would "static if (__ctfe)" work? -- Andrei


Please don't ask me to explain why, because I still don't know. 
But __ctfe is a normal runtime variable :) It has been explained 
to me before why it has to be a runtime variable. I think Don 
knows the answer.


Well, the contents of the static if expression have to be 
evaluated at compile time, so static if (__ctfe) would always be 
true.


Also, if it were to somehow work as imagined then you'd have 
nonsensical things like this:


static if (__ctfe) class Wat {}
auto foo() {
  static if (__ctfe) return new Wat();
  return null;
}
static wat = foo();

wat now has a type at runtime that only exists at compile time.


Re: On Phobos GC hunt

2014-10-08 Thread ketmar via Digitalmars-d
On Wed, 8 Oct 2014 23:25:18 +0300
ketmar via Digitalmars-d  wrote:

> On Wed, 8 Oct 2014 23:20:13 +0300
> ketmar via Digitalmars-d  wrote:
> 
> p.s. or vice versa: "static if (__ctfe)" is always true, to non-ctfe
> code will be removed. sorry, i can't really remember what is true, but
> anyway, it works by removeing one of the branches altogether.
hm. i need some sleep. or new keyboard. or both.




Re: On Phobos GC hunt

2014-10-08 Thread ketmar via Digitalmars-d
On Wed, 8 Oct 2014 23:20:13 +0300
ketmar via Digitalmars-d  wrote:

p.s. or vice versa: "static if (__ctfe)" is always true, to non-ctfe
code will be removed. sorry, i can't really remember what is true, but
anyway, it works by removeing one of the branches altogether.




Re: On Phobos GC hunt

2014-10-08 Thread ketmar via Digitalmars-d
On Wed, 08 Oct 2014 13:10:11 -0700
Andrei Alexandrescu via Digitalmars-d 
wrote:

> Or would "static if (__ctfe)" work? -- Andrei
ha! The Famous Bug! it works, but not as people expected. as "static
if" evaluates when function is *compiling*, __ctfe is false there, and
so the whole "true" branch will be removed as dead code.

i believe that the compiler should warn about this, 'cause i tend to
repeatedly hit this funny thing.
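(for reference, the idiom that does behave as intended is the plain
runtime check, which the CTFE interpreter evaluates as true while
compiled code sees it as false; a minimal sketch with a hypothetical
function name:)
---
import std.stdio : writeln;

string whichPath()
{
    // __ctfe is an ordinary runtime variable, not a compile-time
    // constant: during CTFE the interpreter sees it as true, in
    // compiled code it is false. Both branches stay in the function,
    // which is exactly what "static if (__ctfe)" silently breaks.
    if (__ctfe)
        return "ctfe";
    return "runtime";
}

void main()
{
    enum ct = whichPath(); // enum initializer forces CTFE
    writeln(ct);           // prints "ctfe"
    writeln(whichPath());  // prints "runtime"
}
---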




Re: On Phobos GC hunt

2014-10-08 Thread bearophile via Digitalmars-d

Andrei Alexandrescu:


Or would "static if (__ctfe)" work? -- Andrei


Currently it doesn't work, because __ctfe is a run-time variable. 
Walter originally tried and failed to make it a compile-time 
variable.


Bye,
bearophile


Re: On Phobos GC hunt

2014-10-08 Thread Steven Schveighoffer via Digitalmars-d

On 10/8/14 4:10 PM, Andrei Alexandrescu wrote:

On 10/8/14, 1:01 PM, Andrei Alexandrescu wrote:




That's a bummer. Can we get the compiler to remove the "if (__ctfe)"
code after semantic checking?


Or would "static if (__ctfe)" work? -- Andrei


Please don't ask me to explain why, because I still don't know. But 
__ctfe is a normal runtime variable :) It has been explained to me 
before why it has to be a runtime variable. I think Don knows the answer.


-Steve



Re: On Phobos GC hunt

2014-10-08 Thread Andrei Alexandrescu via Digitalmars-d

On 10/8/14, 1:01 PM, Andrei Alexandrescu wrote:

On 10/8/14, 1:13 AM, Johannes Pfau wrote:

Am Tue, 07 Oct 2014 21:59:05 +
schrieb "Peter Alexander" :


On Tuesday, 7 October 2014 at 20:13:32 UTC, Jacob Carlborg wrote:

I didn't look at any source code to see what "new" is actually
allocating, for example.


I did some random sampling, and it's 90% exceptions, with the
occasional array allocation.

I noticed that a lot of the ~ and ~= complaints are in code that
only ever runs at compile time (generating strings for mixin). I
wonder if there's any way we can silence these false positives.


Code in if(__ctfe) blocks could be (and should be) allowed:
https://github.com/D-Programming-Language/dmd/pull/3572

But if you have a normal function (string generateMixin()) the
compiler can't really know that it's only used at compile time. And if
it's not a template, the code using the GC will be compiled, even if
it's never called. This might be enough to get undefined-symbol errors
if you don't have a GC, so the error messages are kinda valid.


That's a bummer. Can we get the compiler to remove the "if (__ctfe)"
code after semantic checking?


Or would "static if (__ctfe)" work? -- Andrei



Re: What are the worst parts of D?

2014-10-08 Thread Walter Bright via Digitalmars-d

On 10/8/2014 4:17 AM, Don wrote:

As I said in my Dconf 2013 talk -- I advocate a focus on Return On Investment.
I'd love to see us chasing the easy wins.


I love the easy wins, too. It'd be great if you'd start a thread about "Top 10 
Easy Wins" from yours and Sociomantic's perspective.


Note that I've done some work on the deprecations you've mentioned before.



Re: On Phobos GC hunt

2014-10-08 Thread Andrei Alexandrescu via Digitalmars-d

On 10/8/14, 1:13 AM, Johannes Pfau wrote:

Am Tue, 07 Oct 2014 21:59:05 +
schrieb "Peter Alexander" :


On Tuesday, 7 October 2014 at 20:13:32 UTC, Jacob Carlborg wrote:

I didn't look at any source code to see what "new" is actually
allocating, for example.


I did some random sampling, and it's 90% exceptions, with the
occasional array allocation.

I noticed that a lot of the ~ and ~= complaints are in code that
only ever runs at compile time (generating strings for mixin). I
wonder if there's any way we can silence these false positives.


Code in if(__ctfe) blocks could be (and should be) allowed:
https://github.com/D-Programming-Language/dmd/pull/3572

But if you have a normal function (string generateMixin()) the
compiler can't really know that it's only used at compile time. And if
it's not a template, the code using the GC will be compiled, even if
it's never called. This might be enough to get undefined-symbol errors
if you don't have a GC, so the error messages are kinda valid.


That's a bummer. Can we get the compiler to remove the "if (__ctfe)" 
code after semantic checking?


Andrei


Re: What are the worst parts of D?

2014-10-08 Thread Jeremy Powers via Digitalmars-d
On Wed, Oct 8, 2014 at 12:00 PM, Jonathan via Digitalmars-d <
digitalmars-d@puremagic.com> wrote:
...

> 3) Taking a hint from the early success of Flash, add Derelict3 (or some
> basic OpenGL library) directly into Phobos. Despite some of the negatives
> (slower update cycle versus an external lib), it would greatly add to D's
> attractiveness for new developers. I nearly left D after having a host of
> issues putting Derelict3 into my project. If I had this issue, we may be
> missing out on attracting newbies looking to do gfx-related work.
>

Personally I take the opposite view - I'd much prefer a strong and easily
consumed third-party library ecosystem over shoving everything into
Phobos.  Dub is a wonderful thing for D, and needs to be so good that
people use it by default.


Worse is better?

2014-10-08 Thread Joakim via Digitalmars-d
This is a somewhat famous phrase from a late '80s essay that's 
mentioned sometimes, but I hadn't read it till this week.  It's a 
fascinating one-page read, he predicted that lisp would lose out 
to C++ when he delivered this speech in 1990, well worth reading:


https://www.dreamsongs.com/RiseOfWorseIsBetter.html

Since "worse" and "better" are subjective terms, I interpret it 
as "simpler spreads faster and wider than complex."  He thinks 
simpler is worse and complex is often better, hence the title.  
Perhaps it's not as true anymore because that was the wild west 
of computing back then, whereas billions of people use the 
software built using these languages these days, so maybe we 
cannot afford to be so fast and loose.


What does this have to do with D?  Well, the phenomenon he 
describes probably has a big effect on D's adoption even today, 
as he was talking about the spread of programming languages, ones 
we use to this day.  Certainly worth thinking about, as we move 
forward with building D.


Re: What are the worst parts of D?

2014-10-08 Thread Jonathan via Digitalmars-d

My small list of D critiques/wishes from a pragmatic stance:

1) Replace the Stop the World GC
2) It would be great if dmd could provide a code-hinting 
facility, instead of relying on DCD which continually breaks for 
me. It would open more doors for editors to support better code 
completion.
3) Taking a hint from the early success of Flash, add Derelict3 
(or some basic OpenGL library) directly into Phobos. Despite some 
of the negatives (slower update cycle versus an external lib), it 
would greatly add to D's attractiveness for new developers. I 
nearly left D after having a host of issues putting Derelict3 
into my project. If I had this issue, we may be missing out on 
attracting newbies looking to do gfx-related work.


I'm sure this has been talked about, but I'll bring it up anyway:
To focus our efforts, consider switching to ldc. Is it worth 
people's time to continue to optimize DMD when we can accelerate 
our own efforts by relying on an existing compiler? As some have 
pointed out, our community is spread thin over so many efforts... 
perhaps there are ways to consolidate that.


Just my 2cents from a D hobbyist..


Re: On Phobos GC hunt

2014-10-08 Thread grm via Digitalmars-d
I was in a slight hurry and forgot to mention that I (and, I'm 
quite sure, we all) very much appreciate the hands-on mentality 
your approach shows.


looking forward to v2

/Gerhard


Re: On Phobos GC hunt

2014-10-08 Thread Kagamin via Digitalmars-d
On Wednesday, 8 October 2014 at 12:09:00 UTC, Dmitry Olshansky 
wrote:
I think this and (2) can be solved if we come up with solid 
support for RC-closures.


Delegates don't obey data-sharing type checks though. A 
long-standing language issue.


Re: What are the worst parts of D?

2014-10-08 Thread Joakim via Digitalmars-d
On Wednesday, 8 October 2014 at 13:55:11 UTC, Manu via 
Digitalmars-d wrote:

On 08/10/2014 9:20 pm, "Don via Digitalmars-d"
So what do we care about? Mainly, we care about improving the 
core

product.


In general I think that in D we have always suffered from 
spreading
ourselves too thin. We've always had a bunch of cool new 
features that
don't actually work properly. Always, the focus shifts to 
something else,

before the previous feature was finished.


And personally, I doubt that many companies would use D, even 
if with
perfect C++ interop, if the toolchain stayed at the current 
level.


As someone who previously represented a business interest, I 
couldn't agree

more.
Aside from my random frustrated outbursts on a very small set 
of language
issues, the main thing I've been banging on from day 1 is the 
tooling. Much

has improved, but it's still a long way from 'good'.

Debugging, ldc (for windows), and editor integrations (auto 
complete,

navigation, refactoring tools) are my impersonal (and hopefully
non-controversial) short list. They trump everything else I've 
ever

complained about.
The debugging experience is the worst of any language I've used 
since the

90's, and I would make that top priority.


While it would be great if there were a company devoted to such D 
tooling, it doesn't exist right now.  It is completely 
unrealistic to expect a D community of unpaid volunteers to work 
on these features for your paid projects.  If anybody in the 
community cared as much about these features as you, they'd have 
done it already.


I suggest you two open bugzilla issues for all these specific 
bugs or enhancements and put up bounties for their development.  
If you're not willing to do that, expecting the community to do 
work for you for free is just whining that is easily ignored.


Re: Program logic bugs vs input/environmental errors

2014-10-08 Thread Bruno Medeiros via Digitalmars-d

On 03/10/2014 19:20, Sean Kelly wrote:

On Friday, 3 October 2014 at 17:38:40 UTC, Brad Roberts via
Digitalmars-d wrote:


The part of Walter's point that is either deliberately overlooked or
somewhat misunderstood here is the notion of a fault domain.  In a
typical unix or windows based environment, it's a process.  A fault
within the process yields the aborting of the process but not all
processes.  Erlang introduces within it's execution model a concept of
a process within the higher level notion of the os level process.
Within the erlang runtime it's individual processes run independently
and can each fail independently.  The erlang runtime guarantees a
higher level of separation than a typical threaded java or c++
application.  An error within the erlang runtime itself would
justifiably cause the entire system to be halted.  Just as within an
airplane, to use Walter's favorite analogy, the seat entertainment
system is physically and logically separated from flight control
systems thus a fault within the former has no impact on the latter.


Yep.  And I think it's a fair assertion that the default fault
domain in a D program is at the process level, since D is not
inherently memory safe.  But I don't think the language should
necessarily make that assertion to the degree that no other
definition is possible.


Yes to Brad, and then yes to Sean. That nailed the point.

To that I would only add that, when encountering a fault in a process, 
even an estimate (that is, not a 100% certainty) that the fault only 
affects a certain domain of the process would still be useful to 
certain kinds of systems and applications.


I don't think memory-safety is at the core of the issue. Java is 
memory-safe, yet if you encounter a null pointer exception, you're still 
not sure if your whole application is now in an unusable state, or if 
the NPE was just confined to say, the operation the user just tried to 
do, or some other component of the application. There are no guarantees.


--
Bruno Medeiros
https://twitter.com/brunodomedeiros


Re: Program logic bugs vs input/environmental errors

2014-10-08 Thread Bruno Medeiros via Digitalmars-d

On 04/10/2014 10:05, Walter Bright wrote:

On 10/1/2014 7:17 AM, Bruno Medeiros wrote:

Sean, I fully agree with the points you have been making so far.
But if Walter is fixated on thinking that all the practical uses of D
will be
critical systems, or simple (ie, single-use, non-interactive)
command-line
applications, it will be hard for him to comprehend the whole point
that "simply
aborting on error is too brittle in some cases".


Airplane avionics systems all abort on error, yet the airplanes don't
fall out of the sky.

I've explained why and how this works many times, here it is again:

http://www.drdobbs.com/architecture-and-design/safe-systems-from-unreliable-parts/228701716



That's completely irrelevant to the "simply aborting on error is too 
brittle in some cases" point above, because I wasn't talking about 
avionics systems, or any kind of mission critical systems at all. In 
fact, the opposite (non critical systems).


--
Bruno Medeiros
https://twitter.com/brunodomedeiros


Re: Program logic bugs vs input/environmental errors

2014-10-08 Thread Bruno Medeiros via Digitalmars-d

On 04/10/2014 09:39, Walter Bright wrote:

On 10/1/2014 6:17 AM, Bruno Medeiros wrote:

Walter, you do understand that not all software has to be robust -


Yes, I understand that.



in the critical systems sense - to be quality software? And that in
fact, the majority
of software is not critical systems software?...

I was under the impression that D was meant to be a general purpose
language, not a language just for critical systems. Yet, on language
design issues, you keep making a series of arguments and points that
apply *only* to critical systems software.


If someone writes non-robust software, D allows them to do that.
However, I won't leave unchallenged attempts to pass such stuff off as
robust.

Nor will I accept such practices in Phobos, because, as this thread
clearly shows, there are a lot of misunderstandings about what robust
software is. Phobos needs to CLEARLY default towards solid, robust
practice.

It's really too bad that I've never seen any engineering courses on
reliability.

http://www.drdobbs.com/architecture-and-design/safe-systems-from-unreliable-parts/228701716



Well, I set myself a trap to get that response...

Of course, I too want my software to be robust! I doubt that anyone 
would disagree that Phobos should be designed to be as robust as 
possible. But "robust" is too general a term to be precise here, so 
this belies my original point.


I did say robust-in-the-critical-systems-sense... What I was questioning 
was whether D and Phobos should be designed in a way that took critical 
systems software as its main use, and relegate the other kinds of 
software to secondary importance.


(Note: I don't think such dichotomy and compromise *has* to exist in 
order to design a great D and Phobos. But in this discussion I feel the 
choices and vision were heading in a way that would likely harm the 
development of general purpose software in favor of critical systems.)


--
Bruno Medeiros
https://twitter.com/brunodomedeiros


Re: What are the worst parts of D?

2014-10-08 Thread Manu via Digitalmars-d
On 08/10/2014 11:55 pm, "Manu"  wrote:
>
> On 08/10/2014 9:20 pm, "Don via Digitalmars-d" <
digitalmars-d@puremagic.com> wrote:
> >
> > On Monday, 6 October 2014 at 19:07:40 UTC, Andrei Alexandrescu wrote:
> >>
> >> On 10/6/14, 11:55 AM, H. S. Teoh via Digitalmars-d wrote:
> >>>
> >>> On Mon, Oct 06, 2014 at 06:13:41PM +, Dicebot via Digitalmars-d
wrote:
> 
>  On Monday, 6 October 2014 at 16:06:04 UTC, Andrei Alexandrescu wrote:
> >>>
> >>> [...]
> >
> > It would be terrific if Sociomantic would improve its communication
> > with the community about their experience with D and their needs
> > going forward.
> 
> 
>  How about someone starts paying attention to what Don posts? That
>  could be an incredible start. I spend great deal of time both reading
>  this NG (to be aware of what comes next) and writing (to express both
>  personal and Sociomantic concerns) and have literally no idea what
can
>  be done to make communication more clear.
> >>>
> >>>
> >>> I don't remember who it was, but I'm pretty sure *somebody* at
> >>> Sociomantic has stated clearly their request recently: Please break
our
> >>> code *now*, if it helps to fix language design issues, rather than
> >>> later.
> >>
> >>
> >> More particulars would be definitely welcome. I should add that
Sociomantic has an interesting position: it's a 100% D shop so
interoperability is not a concern for them, and they did their own GC so
GC-related improvements are unlikely to make a large difference for them.
So "C++ and GC" is likely not to be high priority for them. -- Andrei
> >
> >
> > Exactly. C++ support is of no interest at all, and GC is something we
contribute to, rather than something we expect from the community.
> > Interestingly we don't even care much about libraries, we've done
everything ourselves.
> >
> > So what do we care about? Mainly, we care about improving the core
product.
> >
> > In general I think that in D we have always suffered from spreading
ourselves too thin. We've always had a bunch of cool new features that
don't actually work properly. Always, the focus shifts to something else,
before the previous feature was finished.
> >
> > At Sociomantic, we've been successful in our industry using only the
features of D1. We're restricted to using D's features from 2007!!
Feature-wise, practically nothing from the last seven years has helped us!
> >
> > With something like C++ support, it's only going to win companies over
when it is essentially complete. That means that working on it is a huge
investment that doesn't start to pay for itself for a very long time. So
although it's a great goal, with a huge potential payoff, I don't think
that it should be consuming a whole lot of energy right now.
> >
> > And personally, I doubt that many companies would use D, even if with
perfect C++ interop, if the toolchain stayed at the current level.
> >
> > As I said in my Dconf 2013 talk -- I advocate a focus on Return On
Investment.
> > I'd love to see us chasing the easy wins.
>
> As someone who previously represented a business interest, I couldn't
agree more.
> Aside from my random frustrated outbursts on a very small set of language
issues, the main thing I've been banging on from day 1 is the tooling. Much
has improved, but it's still a long way from 'good'.
>
> Debugging, ldc (for windows), and editor integrations (auto complete,
navigation, refactoring tools) are my impersonal (and hopefully
non-controversial) short list. They trump everything else I've ever
complained about.
> The debugging experience is the worst of any language I've used since the
90's, and I would make that top priority.
>
> C++ might have helped us years ago, but I already solved those issues
creatively. Debugging can't be solved without tooling and compiler support.

Just to clarify, I'm all for nogc work; that is very important to us and I
appreciate the work, but I wouldn't rate it top priority.
C++ is of no significant value to me personally or professionally. Game
studios don't use much C++, and like I said, we already worked around those
edges.

I can't speak for Remedy now, but I'm confident that they will *need* ldc
working before the game ships. DMD codegen is just not good enough,
particularly relating to float; it uses the x87! O_O


Re: What are the worst parts of D?

2014-10-08 Thread Manu via Digitalmars-d
On 08/10/2014 9:20 pm, "Don via Digitalmars-d" 
wrote:
>
> On Monday, 6 October 2014 at 19:07:40 UTC, Andrei Alexandrescu wrote:
>>
>> On 10/6/14, 11:55 AM, H. S. Teoh via Digitalmars-d wrote:
>>>
>>> On Mon, Oct 06, 2014 at 06:13:41PM +, Dicebot via Digitalmars-d
wrote:

 On Monday, 6 October 2014 at 16:06:04 UTC, Andrei Alexandrescu wrote:
>>>
>>> [...]
>
> It would be terrific if Sociomantic would improve its communication
> with the community about their experience with D and their needs
> going forward.


 How about someone starts paying attention to what Don posts? That
 could be an incredible start. I spend a great deal of time both reading
 this NG (to be aware of what comes next) and writing (to express both
 personal and Sociomantic concerns) and have literally no idea what can
 be done to make communication more clear.
>>>
>>>
>>> I don't remember who it was, but I'm pretty sure *somebody* at
>>> Sociomantic has stated clearly their request recently: Please break our
>>> code *now*, if it helps to fix language design issues, rather than
>>> later.
>>
>>
>> More particulars would be definitely welcome. I should add that
Sociomantic has an interesting position: it's a 100% D shop so
interoperability is not a concern for them, and they did their own GC so
GC-related improvements are unlikely to make a large difference for them.
So "C++ and GC" is likely not to be high priority for them. -- Andrei
>
>
> Exactly. C++ support is of no interest at all, and GC is something we
contribute to, rather than something we expect from the community.
> Interestingly we don't even care much about libraries, we've done
everything ourselves.
>
> So what do we care about? Mainly, we care about improving the core
product.
>
> In general I think that in D we have always suffered from spreading
ourselves too thin. We've always had a bunch of cool new features that
don't actually work properly. Always, the focus shifts to something else,
before the previous feature was finished.
>
> At Sociomantic, we've been successful in our industry using only the
features of D1. We're restricted to using D's features from 2007!!
Feature-wise, practically nothing from the last seven years has helped us!
>
> With something like C++ support, it's only going to win companies over
when it is essentially complete. That means that working on it is a huge
investment that doesn't start to pay for itself for a very long time. So
although it's a great goal, with a huge potential payoff, I don't think
that it should be consuming a whole lot of energy right now.
>
> And personally, I doubt that many companies would use D, even if with
perfect C++ interop, if the toolchain stayed at the current level.
>
> As I said in my Dconf 2013 talk -- I advocate a focus on Return On
Investment.
> I'd love to see us chasing the easy wins.

As someone who previously represented a business interest, I couldn't agree
more.
Aside from my random frustrated outbursts on a very small set of language
issues, the main thing I've been banging on from day 1 is the tooling. Much
has improved, but it's still a long way from 'good'.

Debugging, ldc (for windows), and editor integrations (auto complete,
navigation, refactoring tools) are my impersonal (and hopefully
non-controversial) short list. They trump everything else I've ever
complained about.
The debugging experience is the worst of any language I've used since the
90's, and I would make that top priority.

C++ might have helped us years ago, but I already solved those issues
creatively. Debugging can't be solved without tooling and compiler support.


Re: scope() statements and return

2014-10-08 Thread Regan Heath via Digitalmars-d
On Tue, 07 Oct 2014 14:39:06 +0100, Andrei Alexandrescu  
 wrote:



On 10/7/14, 12:36 AM, monarch_dodra wrote:

Hum... But arguably, that's just exception chaining "happening". Do you
have any examples of someone actually "dealing" with all the exceptions
in a chain in a catch, or actually using the information in a manner
that is more than just printing?


No. But that doesn't mean anything; all uses of exceptions I know of are  
used for just printing. -- Andrei


I have a couple of examples here in front of me.  This is in C#...

[not just for printing]
1. I catch a ChangeConflictException and attempt some basic automatic  
conflict resolution (i.e. if a column has changed in the database but I  
have not changed the local version, then merge the value from the database)


[examining the chain]
2. I catch Exception then test if "ex is TransactionException" AND if  
"ex.InnerException is TimeoutException" (AKA first in chain) then raise a  
different sort of alert (for our GUI to display).


(FYI the reason I don't have a separate catch block for  
TransactionException specifically is that it would involve duplicating all  
the cleanup I am doing in this catch block, all for a 1 line "raise a  
different alert" call - it just didn't seem worth it)
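The same chain inspection translates directly to other languages. Here is a minimal Python sketch of case 2 (names are illustrative only: TransactionError stands in for C#'s TransactionException, and Python's `raise ... from` / `__cause__` plays the role of InnerException):

```python
class TransactionError(Exception):
    """Stand-in for C#'s TransactionException (illustrative name)."""

def run_transaction():
    try:
        raise TimeoutError("backend timed out")
    except TimeoutError as e:
        # Chain the low-level timeout under a higher-level exception.
        raise TransactionError("transaction failed") from e

def handle():
    try:
        run_transaction()
    except Exception as ex:
        # ...shared cleanup would go here (the reason for one catch block)...
        if isinstance(ex, TransactionError) and isinstance(ex.__cause__, TimeoutError):
            return "timeout-alert"   # raise the different sort of alert
        return "generic-alert"

print(handle())  # -> timeout-alert
```

The point survives translation: the handler examines the *chain* (outer type plus first inner cause), not just the outermost exception.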


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


A Value Range Propagation usage example, and more

2014-10-08 Thread bearophile via Digitalmars-d
This is the first part of a function to convert to base 58 (some 
letters are missing, like the upper case "I") used in the Bitcoin 
protocol:



alias Address = ubyte[1 + 4 + RIPEMD160_digest_len];

char[] toBase58(ref Address a) pure nothrow @safe {
    static immutable symbols = "123456789" ~
                               "ABCDEFGHJKLMNPQRSTUVWXYZ" ~
                               "abcdefghijkmnopqrstuvwxyz";
    static assert(symbols.length == 58);

    auto result = new typeof(return)(34);
    foreach_reverse (ref ri; result) {
        uint c = 0;
        foreach (ref ai; a) {
            c = c * 256 + ai;
            ai = cast(ubyte)(c / symbols.length);
            c %= symbols.length;
        }
        ri = symbols[c];
    }
    ...
}


The D type system isn't smart enough to see that "ai" is always 
fitting in an ubyte, so I have had to use a cast(ubyte). But 
casts are dangerous and their usage should be minimized, and 
to!ubyte is slow and makes the function not nothrow. So I've 
rewritten the code like this with a bit of algebraic rewriting:



char[] toBase58(ref Address a) pure nothrow @safe {
    static immutable symbols = "123456789" ~
                               "ABCDEFGHJKLMNPQRSTUVWXYZ" ~
                               "abcdefghijkmnopqrstuvwxyz";
    static assert(symbols.length == 58);

    auto result = new typeof(return)(34);
    foreach_reverse (ref ri; result) {
        uint c = 0;
        foreach (ref ai; a) {
            immutable d = (c % symbols.length) * 256 + ai;
            ai = d / symbols.length;
            c = d;
        }
        ri = symbols[c % symbols.length];
    }
    ...
}


Now it can be a little slower because the integer division and 
modulus have different divisors, so perhaps they can't be 
implemented with little more than a single division, as before 
(I have not compared the assembly), but for the purposes of this 
code the performance difference is not a problem. Now the D type 
system is able to see that "ai" always fits in a ubyte, and 
there's no need for a cast: the compiler inserts a safe implicit 
conversion. This is awesome.
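For anyone wanting to check the value-range argument outside D, here is a rough plain-Python translation of the same digit loop (not the D code itself): since c % 58 is at most 57, the new numerator (c % 58) * 256 + ai is at most 57 * 256 + 255 = 14847, and 14847 / 58 = 255, so the updated byte always stays in 0..255.

```python
SYMBOLS = ("123456789"
           "ABCDEFGHJKLMNPQRSTUVWXYZ"
           "abcdefghijkmnopqrstuvwxyz")
assert len(SYMBOLS) == 58

def to_base58(data, out_len=34):
    """Big-endian byte string -> base-58 text, mirroring the D loop above."""
    a = list(data)                 # mutable copy of the address bytes
    digits = []
    for _ in range(out_len):
        c = 0
        for i, ai in enumerate(a):
            c = c * 256 + ai
            a[i] = c // 58         # always 0..255: the value-range argument
            assert 0 <= a[i] <= 255
            c %= 58
        digits.append(SYMBOLS[c])
    return "".join(reversed(digits))

print(to_base58(bytes(24) + b"\x39"))  # value 57 -> "1" * 33 + "z"
```

The 25-byte input length matches the Address alias above (1 + 4 + 20 RIPEMD-160 bytes); the runtime asserts confirm mechanically what D's value range propagation proves statically.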


- - - - - - - - - - - - - -

But of course you often want more :-)

This is another case where the current D type system allows you 
to avoid a cast:


void main() {
    char['z' - 'a' + 1] arr;

    foreach (immutable i, ref c; arr)
        c = 'a' + i;
}



But if you want to use ranges and functional UFCS chains you 
currently need the cast:



void main() {
    import std.range, std.algorithm, std.array;

    char[26] arr = 26
                   .iota
                   .map!(i => cast(char)('a' + i))
                   .array;
}


In theory this program has the same compile-time information as 
the foreach case. In practice foreach is a built-in that enjoys 
more semantics than an iota+map chain.


Currently iota(26) loses the compile-time information about the 
range, so you can't do (note: the "max" attribute doesn't exist):


void main() {
    import std.range: iota;
    auto r = iota(26);
    enum ubyte m = r.max;
}


Currently the only way to keep that compile-time information is 
to use a template argument:


void main() {
    import std.range: tIota;
    auto r = tIota!26;
    enum ubyte m = r.max; // OK
}

But even if you write such a tIota range, the map!() will lose the 
compile-time value range information. And even if you manage to 
write a map!() able to do it with template arguments, you have 
template bloat.


So there's a desire to manage the compile-time information (like 
value range information) of (number) literals without causing 
template bloat and without the need to use explicit template 
arguments.


Bye,
bearophile


Re: On Phobos GC hunt

2014-10-08 Thread Dmitry Olshansky via Digitalmars-d

On Wednesday, 8 October 2014 at 11:25:25 UTC, Johannes Pfau wrote:

Am Tue, 07 Oct 2014 15:57:58 +
schrieb "Dmitry Olshansky" :



Instead we need to observe patterns and label it automatically 
until the non-trivial subset remains. So everybody, please 
take time and identify simple patterns and post back your 
ideas on solution(s).




I just had a look at all closure allocations and identified 
these

patterns:


Awesome! This is exactly the kind of help I wanted.




1) Fixable by manually stack-allocating closure
   A delegate is passed to some function which stores this 
delegate and
   therefore correctly doesn't mark the parameter as scope. 
However,
   the lifetime of the stored delegate is still limited to the 
current
   function (e.g. it's stored in a struct instance, but on the 
stack).


   Can be fixed by creating a static struct{T... members; void
   doSomething(){access members}} instance on stack and passing
   &stackvar.doSomething as delegate.


Hm... Probably we can create a template for this.



2) Using delegates to add state to ranges
   
   return iota(dim).
 filter!(i => ptr[i])().
 map!(i => BitsSet!size_t(ptr[i], i * bitsPerSizeT))().
 joiner();
   
   This code adds state to ranges without declaring a new type: 
the ptr
variable is not accessible and needs to be moved into a 
closure.

   Declaring a custom range type is a solution, but not
   straightforward: If the ptr field is moved into the range a 
closure
is not necessary. But if the range is copied, its address 
changes

   and the delegate passed to map is now invalid.



Indeed, such code is fine in "user-space" but has no place in 
the library.



3) Functions taking delegates as generic parameters
   receiveTimeout,receive,formattedWrite accept different types,
   including delegates. The delegates can all be scope to avoid 
the
   allocation but is void foo(T)(scope T) a good idea? The 
alternative
   is probably making an overload for delegates with scope 
attribute.


   (The result is that all functions calling receiveTimeout,... 
with a

   delegate allocate a closure)

4) Solvable with manual memory management
   Some specific functions can't be easily fixed, but the 
delegates
   they create have a well defined lifetime (for example spawn 
creates
   a delegate which is only needed at the startup of a new 
thread, it's

   never used again). These could be malloc+freed.



I think this and (2) can be solved if we come up with solid 
support for RC-closures.



5) Design issue
   These functions generally create a delegate using variables 
passed
   in as parameters. There's no way to avoid closures here. 
Although
   manual allocation is possible, the lifetime is undefined 
and can

   only be managed by the GC.

6) Other
   Two cases can be fixed by moving a buffer into a struct or 
moving a
   function out of a member function into its surrounding 
class.




Yeah, there are always outliers ;)

Also notable: 17 out of 35 cases are in std.net.curl. This is 
because

curl heavily uses delegates and wrapper delegates.


Interesting... it must be due to cURL's callback-based API.
All in all, std.net.curl is a constant source of complaints; it 
may need some work to fix other issues anyway.




Re: Program logic bugs vs input/environmental errors

2014-10-08 Thread Timon Gehr via Digitalmars-d

On 10/08/2014 05:19 AM, Walter Bright wrote:

On 10/7/2014 6:18 PM, Timon Gehr wrote:
 > I can report these if present.

Writing a strongly worded letter to the White Star Line isn't going to
help you when the ship is sinking in the middle of the North Atlantic.
...


Maybe it is going to help the next guy whose ship will not be sinking 
due to that report.



What will help is minimizing the damage that a detected fault may cause.
You cannot rely on the specification when a fault has been detected.
"This can't be happening!" are likely the last words of more than a few
people.



Sure, I agree.

Just note that if some programmer is checking for overflow after the 
fact using the following idiom:


int x = y*z;
if (x/y != z) assert(0);

Then the language can be defined such that e.g.:

0. The overflow will throw on its own.

1. Overflow is undefined, i.e. the optimizer is allowed to remove the 
check and avoid the detection of the bug.


2. Guaranteed wrap-around behaviour makes the code valid and the bug is 
detected by the assert.


3. Arbitrary-precision integers.

4. ...

Code is simply less likely to run as intended, or else abort, if 
possibility 1 is consciously taken. The language implementation may 
still be buggy, but if it can sink your ship even when it generated code 
according to the specification, it will sink in more cases. Of course 
you can say that the programmer is at fault for checking for overflow in 
the wrong fashion, but this does not matter at the point where your ship 
is sinking. One may still see this choice as the right trade-off, but it 
is not the only possibility 'by definition'.
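To make the trade-off concrete, here is a small Python model of option 2, guaranteed 32-bit wrap-around (Python's // floors where C/D division truncates, but for exact multiples they agree, so the detection logic carries over). Under these semantics the post-hoc x/y != z check is a reliable overflow detector; under option 1, a C-style optimizer may legally delete exactly that check:

```python
def mul_wrap32(y, z):
    """Signed 32-bit multiply with two's-complement wrap-around (option 2)."""
    x = (y * z) & 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

def checked_mul(y, z):
    x = mul_wrap32(y, z)
    if y != 0 and x // y != z:     # the idiom from the post
        raise OverflowError("y * z overflowed 32 bits")
    return x

print(checked_mul(3, 7))           # -> 21
# checked_mul(65536, 65536) raises: 2^32 wraps to 0, and 0 // 65536 != 65536
```

With wrap-around guaranteed the detection is in fact complete: if x // y == z then x differs from y*z by less than |y| < 2^32, but a wrapped result differs from the true product by a nonzero multiple of 2^32, a contradiction.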




Re: On Phobos GC hunt

2014-10-08 Thread Johannes Pfau via Digitalmars-d
Am Tue, 07 Oct 2014 15:57:58 +
schrieb "Dmitry Olshansky" :

> 
> Instead we need to observe patterns and label it automatically 
> until the non-trivial subset remains. So everybody, please take 
> time and identify simple patterns and post back your ideas on 
> solution(s).
> 

I just had a look at all closure allocations and identified these
patterns:


1) Fixable by manually stack-allocating closure
   A delegate is passed to some function which stores this delegate and
   therefore correctly doesn't mark the parameter as scope. However,
   the lifetime of the stored delegate is still limited to the current
   function (e.g. it's stored in a struct instance, but on the stack).

   Can be fixed by creating a static struct{T... members; void
   doSomething(){access members}} instance on stack and passing
   &stackvar.doSomething as delegate.

2) Using delegates to add state to ranges
   
   return iota(dim).
 filter!(i => ptr[i])().
 map!(i => BitsSet!size_t(ptr[i], i * bitsPerSizeT))().
 joiner();
   
   This code adds state to ranges without declaring a new type: the ptr
   variable is not accessible and needs to be moved into a closure.
   Declaring a custom range type is a solution, but not
   straightforward: If the ptr field is moved into the range a closure
   is not necessary. But if the range is copied, its address changes
   and the delegate passed to map is now invalid.

3) Functions taking delegates as generic parameters
   receiveTimeout,receive,formattedWrite accept different types,
   including delegates. The delegates can all be scope to avoid the
   allocation but is void foo(T)(scope T) a good idea? The alternative
   is probably making an overload for delegates with scope attribute.

   (The result is that all functions calling receiveTimeout,... with a
   delegate allocate a closure)

4) Solvable with manual memory management
   Some specific functions can't be easily fixed, but the delegates
   they create have a well defined lifetime (for example spawn creates
   a delegate which is only needed at the startup of a new thread, it's
   never used again). These could be malloc+freed.

5) Design issue
   These functions generally create a delegate using variables passed
   in as parameters. There's no way to avoid closures here. Although
   manual allocation is possible, the lifetime is undefined and can
   only be managed by the GC.

6) Other
   Two cases can be fixed by moving a buffer into a struct or moving a
   function out of a member function into its surrounding class.


Also notable: 17 out of 35 cases are in std.net.curl. This is because
curl heavily uses delegates and wrapper delegates.


Re: What are the worst parts of D?

2014-10-08 Thread Don via Digitalmars-d
On Monday, 6 October 2014 at 19:07:40 UTC, Andrei Alexandrescu 
wrote:

On 10/6/14, 11:55 AM, H. S. Teoh via Digitalmars-d wrote:
On Mon, Oct 06, 2014 at 06:13:41PM +, Dicebot via 
Digitalmars-d wrote:
On Monday, 6 October 2014 at 16:06:04 UTC, Andrei 
Alexandrescu wrote:

[...]
It would be terrific if Sociomantic would improve its 
communication
with the community about their experience with D and their 
needs

going forward.


How about someone starts paying attention to what Don posts? 
That
could be an incredible start. I spend a great deal of time both 
reading
this NG (to be aware of what comes next) and writing (to 
express both
personal and Sociomantic concerns) and have literally no idea 
what can

be done to make communication more clear.


I don't remember who it was, but I'm pretty sure *somebody* at
Sociomantic has stated clearly their request recently: Please 
break our
code *now*, if it helps to fix language design issues, rather 
than

later.


More particulars would be definitely welcome. I should add that 
Sociomantic has an interesting position: it's a 100% D shop so 
interoperability is not a concern for them, and they did their 
own GC so GC-related improvements are unlikely to make a large 
difference for them. So "C++ and GC" is likely not to be high 
priority for them. -- Andrei


Exactly. C++ support is of no interest at all, and GC is 
something we contribute to, rather than something we expect from 
the community.
Interestingly we don't even care much about libraries, we've done 
everything ourselves.


So what do we care about? Mainly, we care about improving the 
core product.


In general I think that in D we have always suffered from 
spreading ourselves too thin. We've always had a bunch of cool 
new features that don't actually work properly. Always, the focus 
shifts to something else, before the previous feature was 
finished.


At Sociomantic, we've been successful in our industry using only 
the features of D1. We're restricted to using D's features from 
2007!! Feature-wise, practically nothing from the last seven 
years has helped us!


With something like C++ support, it's only going to win companies 
over when it is essentially complete. That means that working on 
it is a huge investment that doesn't start to pay for itself for 
a very long time. So although it's a great goal, with a huge 
potential payoff, I don't think that it should be consuming a 
whole lot of energy right now.


And personally, I doubt that many companies would use D, even if 
with perfect C++ interop, if the toolchain stayed at the current 
level.


As I said in my Dconf 2013 talk -- I advocate a focus on Return 
On Investment.

I'd love to see us chasing the easy wins.






Re: Program logic bugs vs input/environmental errors

2014-10-08 Thread Nick Sabalausky via Digitalmars-d

On 10/07/2014 11:29 PM, Walter Bright wrote:

On 10/7/2014 3:54 PM, Nick Sabalausky wrote:

A salesman's whole freaking *job* is to be a professional liar!


Poor salesmen are liars. But the really, really good ones are ones who
are able to match up what a customer needs with the right product for
him. There, he is providing a valuable service to the customer.



Can't say I've personally come across any of the latter (it relies on 
salesmen knowing what they're talking about and still working sales 
anyway - which I'm sure does occur for various reasons, but doesn't seem 
common from what I've seen). But maybe I've just spent far too much time 
at MicroCenter ;) Great store, but dumb sales staff ;)



Serve the customer well like that, and you get a repeat customer. I know
many salesmen who get my repeat business because of that.



Certainly reasonable points, and I'm glad to hear there *are* 
respectable ones out there.



The prof who taught me accounting used to sell cars. I asked him how to
tell a good dealership from a bad one. He told me the good ones have
been in business for more than 5 years, because by then one has run out
of new suckers and is relying on repeat business.



That sounds reasonable on the surface, but it relies on several 
questionable assumptions:


1. Suckers routinely know they've been suckered.

2. Suckers avoid giving repeat business to those who suckered them (not 
as reliable an assumption as one might expect)


3. The rate of loss on previous suckers overshadows the rate of new 
suckers. (Ex: No matter how badly people hate working at McDonald's, 
they're unlikely to run low on fresh applicants without a major 
birthrate decline - and even then they'd have 16 years to prepare)


4. Good dealerships don't become bad.

5. There *exists* a good one within a reasonable distance.

6. People haven't become disillusioned and given up on trying to find a 
good one (whether a good one exists or not, the effect here would be the 
same).


7. The bad ones aren't able to compete/survive through other means. 
(Cornering a market, mindshare, convincing ads, misc gimmicks, 
merchandising or other side-revenue streams, anti-competitive practices, 
etc.)


Also, the strategy has a potential self-destruct switch: Even if the 
strategy works, if enough people follow it then even good dealerships 
might not be able to survive the initial 5 year trial.


Plus, I know of a counter-example around here. From an insider, I've 
heard horror stories about the shit the managers, finance people, etc 
would pull. But they're a MAJOR dealer in the area and have been for as 
long as I can remember.



But then again, slots and video poker aren't exactly my thing anyway.
I'm from
the 80's: If I plunk coins into a machine I expect to get food,
beverage, clean
laundry, or *actual gameplay*. Repeatedly purchasing the message "You
lose"
while the entire building itself is treating me like a complete
brain-dead idiot
isn't exactly my idea of "addictive".


I found gambling to be a painful experience, not entertaining at all.


I actually enjoyed that evening quite a bit: A road trip with friends is 
always fun, as is seeing new places, and it was certainly a very pretty 
place inside (for a very good reason, of course). But admittedly, the 
psychological tricks were somewhat insulting, and by the time I got 
through the $20 I'd budgeted I had already gotten very, very bored with 
slot machines and video poker. And blackjack's something I'd gotten 
plenty of all the way back on the Apple II.


If I want to gamble I'll just go buy more insurance ;) Better odds.

Or stock market. At least that doesn't have quite as much of a "house" 
that "always wins", at least not to the same extent.




Re: Program logic bugs vs input/environmental errors

2014-10-08 Thread eles via Digitalmars-d
On Wednesday, 8 October 2014 at 08:16:08 UTC, Nick Sabalausky 
wrote:

On 10/07/2014 08:37 PM, H. S. Teoh via Digitalmars-d wrote:

On 10/08/2014 12:10 AM, Nick Sabalausky wrote:
[...]



equivalent to choosing *both* of the other two doors.


Yeah, I think that is the best way to put it.


Re: RFC: tagged pointer

2014-10-08 Thread bearophile via Digitalmars-d

deadalnix:

There is something that is badly needed in std.bitmanip: a way 
to create tagged pointers. It is doable safely by checking 
pointer alignment and allowing for n bits to be taken for 
various things.


It can't be used for GC-managed pointers. A possible usage syntax:

enum Tag1 { A, B, C, D }
alias TP1 = TaggedPointer!Tag1; // Uses 2 bits.
enum Tag2 { A, B }
alias TP2 = TaggedPointer!Tag2; // Uses 1 bit.
enum Tag3 { A, B, C, D, E } // 5 possibilities
alias TP3 = TaggedPointer!Tag3; // Uses 3 bits.

Alternative name "TaggedPtr".
It should verify some things statically about the given enum.

I wrote a tagged pointer struct in D1 to implement a more 
memory-succinct Trie (I think the tag was one bit, to tell apart 
nodes that terminate a word from the other nodes). But now I can't 
find the source code...
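The core trick is easy to model in any language: for data aligned to 2^n bytes, the low n bits of every valid address are zero, so a tag enum with up to 2^n values can live there. A language-neutral Python sketch, with plain integers standing in for pointers (a real D TaggedPointer would wrap a typed pointer and verify things statically, as suggested above):

```python
TAG_BITS = 2                     # room for a 4-value enum such as Tag1 above
TAG_MASK = (1 << TAG_BITS) - 1   # 0b11

def pack(addr, tag):
    # Only valid for addresses aligned to 2**TAG_BITS bytes.
    assert addr & TAG_MASK == 0, "pointer not sufficiently aligned"
    assert 0 <= tag <= TAG_MASK, "tag does not fit in the spare bits"
    return addr | tag

def unpack(word):
    # Clear the low bits to recover the address; keep them as the tag.
    return word & ~TAG_MASK, word & TAG_MASK

word = pack(0x7FFF8040, 3)
print(unpack(word) == (0x7FFF8040, 3))  # -> True
```

This also shows why GC-managed pointers are off limits: a conservative or precise GC scanning the packed word would see a mutated address and fail to keep the target alive.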


Bye,
bearophile


Re: Program logic bugs vs input/environmental errors

2014-10-08 Thread Nick Sabalausky via Digitalmars-d

On 10/07/2014 08:37 PM, H. S. Teoh via Digitalmars-d wrote:

On 10/08/2014 12:10 AM, Nick Sabalausky wrote:
[...]

I've managed to grok it, but yet even I (try as I may) just cannot
truly grok the monty hall problem. I *can* reliably come up with the
correct answer, but *never* through an actual mental model of the
problem, *only* by very, very carefully thinking through each step of
the problem. And that never changes no matter how many times I think
it through.

[...]

The secret behind the monty hall scenario, is that the host is actually
leaking extra information to you about where the car might be.
[...]


Hmm, yea, that is a good thing to realize about it.

I think a big part of what trips me up is applying a broken, contorted 
version of the "coin toss" reasoning (coin toss == my usual approach to 
probability).


Because of the "coin toss" problem, I'm naturally inclined to see past 
events as irrelevant. So, initial impression is: I see two closed doors, 
an irrelevant open door, and a choice: "Closed door A or Closed door B?".


Obviously, it's a total fallacy to assume "well, if there's two choices 
then they must be weighted equally." But, naturally, I figure that all 
three doors initially have equal changes, and so I'm already thinking 
"unweighted, even distribution", and then bam, my mind (falsely) sums 
up: "Two options, uniform distribution, third door isn't a choice so 
it's just a distraction. Therefore, 50/50".


Now yes, I *do* see several fallacies/oversights/mistakes in that, but 
that's how my mind tries to setup the problem. So then I wind up working 
backwards from that or forcing myself to abandon it by starting from the 
beginning and carefully working it through.


Another way to look at it, very similar to yours actually, and I think 
more or less the way the kid presented it in "21" (but in a typical 
hollywood "We hate exposition with a passion, so just rush through it as 
hastily as possible, we're only including it because writers expect us 
to, so who cares if nobody can follow it" style):


1. Three equal choices: 1/3 I'm right, 2/3 I'm wrong.

2. New choice: Bet that I was right (1/3), bet that I was wrong (2/3)

3. "An extra 33% chance for free? Sure, I'll take it."

Hmm, looking at it now, I guess the second choice is simply *inverting* 
your first choice. Ahh, now I get what the kid (and you) were saying much 
better: Choosing "I'll change my guess" is equivalent to choosing *both* 
of the other two doors.


The fact that he opens one of those other two doors is a complete 
distraction and totally irrelevant. Makes you think you're only choosing 
"the other ONE door" when you're really choosing "the other TWO doors". 
Interesting.
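That "switching = betting your first pick was wrong" framing is also easy to sanity-check by simulation. A quick Python sketch (the host's goat-door reveal is folded into that equivalence rather than modelled explicitly):

```python
import random

def monty_hall(switch, trials=100_000, seed=1):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # The host's reveal adds no randomness: switching wins exactly
        # when the first pick was wrong, staying wins when it was right.
        wins += (pick != car) if switch else (pick == car)
    return wins / trials

print(monty_hall(True), monty_hall(False))  # switching ~2/3, staying ~1/3
```

Running it shows the switching strategy winning about two thirds of the time, matching the "inverting your first choice" argument above.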


See, this is why I love this NG :)



Re: On Phobos GC hunt

2014-10-08 Thread Johannes Pfau via Digitalmars-d
Am Tue, 07 Oct 2014 21:59:05 +
schrieb "Peter Alexander" :

> On Tuesday, 7 October 2014 at 20:13:32 UTC, Jacob Carlborg wrote:
> > I didn't look at any source code to see what "new" is actually 
> > allocating, for example.
> 
> I did some random sampling, and it's 90% exceptions, with the 
> occasional array allocation.
> 
> I noticed that a lot of the ~ and ~= complaints are in code that 
> only ever runs at compile time (generating strings for mixin). I 
> wonder if there's any way we can silence these false positives.

Code in if(__ctfe) blocks could be (and should be) allowed:
https://github.com/D-Programming-Language/dmd/pull/3572

But if you have got a normal function (string generateMixin()) the
compiler can't really know that it's only used at compile time. And if
it's not a template the code using the GC will be compiled, even if
it's never called. This might be enough to get undefined symbol errors
if you don't have a GC, so the error messages are kinda valid.


Re: On Phobos GC hunt

2014-10-08 Thread Dmitry Olshansky via Digitalmars-d

On Tuesday, 7 October 2014 at 16:23:19 UTC, grm wrote:
1.) It may be helpful to reduce the noise by ignoring every match 
after a new (and probably merging multiple 'operator ~' 
alarms within the same statement).


The tool currently is a quick line-based hack, hence no notion of 
statement.
It's indeed a good idea to merge all messages for one statement 
and de-duplicate on per statement basis.



2.) There seems to be a problem with repeated alarms:
When viewing the page source, this link shows up numerous 
times. See

https://github.com/D-Programming-Language//phobos/blob/d4d98124ab6cbef7097025a7cfd1161d1963c87e/std/conv.d#L688


There are lots of toImpl overloads, and deduplication is done on a 
module:LOC basis, so they all show up. Going to fix this in v2 by 
merging all of them into one row.




/Gerhard




Re: Program logic bugs vs input/environmental errors

2014-10-08 Thread eles via Digitalmars-d

On Wednesday, 8 October 2014 at 07:51:39 UTC, eles wrote:

On Tuesday, 7 October 2014 at 23:49:37 UTC, Timon Gehr wrote:

On 10/08/2014 12:10 AM, Nick Sabalausky wrote:

On 10/07/2014 06:47 AM, "Ola Fosheim Grøstad" wrote:

On Tuesday, 7 October 2014 at 08:19:15 UTC, Nick Sabalausky





against a series with first 100 heads and the 101rd being a


well, 101st


Re: On Phobos GC hunt

2014-10-08 Thread Dmitry Olshansky via Digitalmars-d

On Tuesday, 7 October 2014 at 21:59:08 UTC, Peter Alexander wrote:
On Tuesday, 7 October 2014 at 20:13:32 UTC, Jacob Carlborg 
wrote:
I didn't look at any source code to see what "new" is actually 
allocating, for example.


I did some random sampling, and it's 90% exceptions, with the 
occasional array allocation.




That's interesting. I suspected around 50%. Well, that's even 
better, since if we do ref-counted exceptions we solve 90% of the 
problem ;)


I noticed that a lot of the ~ and ~= complaints are in code 
that only ever runs at compile time (generating strings for 
mixin). I wonder if there's any way we can silence these false 
positives.


I'm going to use a blacklist for these, as the compiler can't in 
general know whether a function is used exclusively at CTFE or not.


Okay, I think I should go a bit further with the second version 
of the tool.


Things on todo list:
 - make the tool general enough to work for any GitHub-based project 
(and hackable for other hostings)

 - use Brian's D parser to accurately find artifacts
 - detect "throw new SomeStuff" pattern and automatically 
populate potential fix line
 - list all source links in one column for the same function 
(this needs a proper parser)
 - use blacklist of : to filter out 
CTFE
 - use current data from wiki for "potential fix" column if 
present


Holy grail is:
 - plot a DOT call-graph of GC users, with the leaves being the ones 
reported by -vgc. So I start with this list, then add the functions 
that use them, then the functions that use those functions, and so on.
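A minimal sketch of what the line-based "throw new SomeStuff" detection from the todo list could look like (the real tool isn't shown in this thread, so the regex and function names below are purely illustrative; a proper D parser, as noted above, would be more accurate):

```python
import re

# Hypothetical line-based detector for the "throw new SomeException(...)"
# pattern; names here are illustrative, not the actual tool's.
THROW_NEW = re.compile(r'\bthrow\s+new\s+([A-Za-z_]\w*)')

def find_throw_new(lines):
    """Yield (line_number, exception_type) for every match."""
    for lineno, line in enumerate(lines, 1):
        m = THROW_NEW.search(line)
        if m:
            yield lineno, m.group(1)

src = [
    'if (i < 0)',
    '    throw new RangeError("negative index");',
    'auto s = a ~ b;',
]
print(list(find_throw_new(src)))  # [(2, 'RangeError')]
```

Each hit could then be used to pre-populate the "potential fix" column, e.g. suggesting a ref-counted or preallocated exception.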




Re: Program logic bugs vs input/environmental errors

2014-10-08 Thread eles via Digitalmars-d

On Tuesday, 7 October 2014 at 23:49:37 UTC, Timon Gehr wrote:

On 10/08/2014 12:10 AM, Nick Sabalausky wrote:

On 10/07/2014 06:47 AM, "Ola Fosheim Grøstad" wrote:

On Tuesday, 7 October 2014 at 08:19:15 UTC, Nick Sabalausky



What is this reason?


This one:

The result of a coin toss is independent at each attempt. It does 
not depend on past or future results. The probability is 0.5.


On the other hand, the probability of obtaining a series of 100 heads 
in a row is very small (exactly because of that independence).


The probability of obtaining a series of 101 heads in a row is even 
smaller, so people will assume that the 101st toss should probably 
give a "tails".


But they forget that the probability of a 101-toss series where the 
first 100 are heads and the 101st is tails is exactly *the same* as 
the probability of a series of 101 heads.


They compare the probability of a series of 101 heads with the 
probability of a series of 100 heads, instead of comparing it against 
a series whose first 100 tosses are heads and whose 101st is a tail.


It is a choice bias (we have a tendency to compare the things that 
are easier, not more pertinent, to compare).
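The comparison can be made exact with a few lines of arithmetic; here's a quick sketch (in Python, just for illustration) of the point above:

```python
from fractions import Fraction

p = Fraction(1, 2)  # probability of heads (or tails) on one fair toss

# Every *specific* sequence of 101 independent fair tosses has the
# same probability, (1/2)^101, regardless of its pattern.
p_101_heads = p ** 101                  # H H H ... H  (101 heads)
p_100_heads_then_tail = p ** 100 * p    # H H ... H T  (100 heads, 1 tail)

print(p_101_heads == p_100_heads_then_tail)  # True
```

What people intuitively compare instead is the 100-toss prefix, which is of course twice as likely as either full 101-toss sequence.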


Re: Program logic bugs vs input/environmental errors

2014-10-08 Thread eles via Digitalmars-d
On Wednesday, 8 October 2014 at 07:00:38 UTC, Dominikus Dittes 
Scherkl wrote:

On Wednesday, 8 October 2014 at 01:22:49 UTC, Timon Gehr wrote:


If he would ever open the right door, you would just take it 
too.


Almost. If he opens the winning door, he gives you another very 
important piece of information: the correctness of your first choice. 
If you already know whether your first choice is correct or wrong, 
then having the host open a door (it does not matter which of the 
remaining two, in this case) solves the problem without ambiguity.


But, when you make your second choice, you still do not know whether 
your first choice was correct or not. The only thing you know is that 
the chance that your first choice was correct is half the chance that 
it was wrong.


So you bet that your first choice was wrong, and you move on to 
the next problem, which, assuming this bet, now becomes a 
non-ambiguous problem.


The key is this: "how would a third person bet on my first 
choice?" Reasonably, he would bet that the choice is wrong. So 
why wouldn't I do the same?


Re: Program logic bugs vs input/environmental errors

2014-10-08 Thread eles via Digitalmars-d
On Wednesday, 8 October 2014 at 07:35:25 UTC, Nick Sabalausky 
wrote:

On 10/07/2014 07:49 PM, Timon Gehr wrote:

On 10/08/2014 12:10 AM, Nick Sabalausky wrote:
Ex: A lot of people have trouble understanding that getting 
"heads" in a
coinflip many times in a row does *not* increase the 
likelihood of the

next flip being "tails". And there's a very understandable


Of course it does not increase the probability of getting a "tails". 
Actually, it increases the probability that you'll get "heads" 
again.


For the simplest explanation, see here:

http://batman.wikia.com/wiki/Two-Face's_Coin


Re: Program logic bugs vs input/environmental errors

2014-10-08 Thread ketmar via Digitalmars-d
On Tue, 7 Oct 2014 17:37:51 -0700
"H. S. Teoh via Digitalmars-d"  wrote:

> told in a deliberately misleading way -- the fact that the host
> *never* opens the right door is often left as an unstated "common
> sense" assumption, thereby increasing the likelihood that people will
> overlook this minor but important detail.
that's why i was always against this "problem". if you're giving me a
logic problem, give me all the information. anything not written clearly
can't be part of the problem. that's why the right answer for the
version where i wasn't told that the host never opens the right door is
"50/50".




Re: Program logic bugs vs input/environmental errors

2014-10-08 Thread Nick Sabalausky via Digitalmars-d

On 10/07/2014 07:49 PM, Timon Gehr wrote:

On 10/08/2014 12:10 AM, Nick Sabalausky wrote:

Ex: A lot of people have trouble understanding that getting "heads" in a
coinflip many times in a row does *not* increase the likelihood of the
next flip being "tails". And there's a very understandable reason why
that's difficult to grasp.


What is this reason? It would be really spooky if the probability was
actually increased in this way. You could win at 'heads or tails' by
flipping a coin really many times until you got a sufficiently long run
of 'tails', then going to another room and betting that the next flip
will be 'heads', and if people didn't intuitively understand that, some
would actually try to apply this trick. (Do they?)



I have actually met a lot of people who instinctively believe that 
getting "tails" many times in a row means that "heads" becomes more and 
more inevitable. Obviously they're wrong about that, but I think I *do* 
understand how they get tripped up:


What people *do* intuitively understand is that the overall number of 
"heads" and "tails" are likely to be similar. Moreover, statistically 
speaking, the more coin tosses there are, the more the overall past 
results tend to converge towards 50%/50%. (Which is pretty much what's 
implied by "uniform random distribution".) This much is pretty easy for 
people to intuitively understand, even if they don't know the 
mathematical details.


As a result, people's mental models will usually involve some general 
notion of "there's a natural tendency for the 'heads' and 'tails' to 
even out." Unfortunately, that summary is...well...partly true but also 
partly inaccurate.


So they take that kinda-shaky and not-entirely-accurate (but still 
*partially* true) mental summary and are then faced with the coin toss 
problem: "You've gotten 'tails' 10,000 times in a row." "Wow, really? 
That many?" So then the questionable mental model kicks in: "...natural 
tendency to even out..." The inevitable result? "Wow, I must be overdue 
for a heads!" Fallacious certainly, but also common and somewhat 
understandable.


Another aspect that can mix people up:

If you keep flipping the coin, over and over and over, it *is* very 
likely that at *some* point you'll get a "heads". That much *is* true 
and surprises nobody. Unfortunately, as true as it is, it's *not* 
relevant to individual tosses: Their individual likelihoods *always* 
stay the same: 50%. So we seemingly have a situation where something 
("very, very likely to get a heads") is true of the whole *without* 
being true of *any* of the individual parts. While that does occur, it 
isn't exactly a super-common thing in normal everyday life, so it can be 
unintuitive for people.


And another confusion:

Suppose we rephrase it like this: "If you keep tossing a coin, how 
likely are you to get 10,000 'tails' in a row AND then get ANOTHER 
'tails'?" Not very freaking likely, of course: 1 in 2^10,001. But 
*after* those first 10,000 'tails' have already occurred, the answer 
changes completely.


What? Seriously? Math that totally changes based on "when"?!? But 2+2 is 
*always* 4!! All of a sudden, here we have a math where your location on 
the timeline is *crucially* important[1], and that's gotta trip up some 
of the people who (like everyone) started out with math just being 
arithmetic.


[1] Or at least time *appears* to be crucially important, depending on 
your perspective: We could easily say that "time" is nothing more than 
an irrelevant detail of the hypothetical scenario and the *real* 
mathematics is just one scenario of "I have 10,001 samples of 50% 
probability" versus a completely different scenario of "I have 10,000 
samples of 100% probability and 1 sample of 50% probability". Of course, 
deciding which of those problems is the one we're actually looking at 
involves considering where you are on the timeline.
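The "individual likelihoods always stay the same" point is easy to check empirically. A small simulation sketch (in Python; the streak length, trial count, and seed are arbitrary choices of mine) that looks only at flips immediately following a run of tails, and counts how often they come up heads:

```python
import random

def next_flip_after_streak(streak_len=5, trials=200_000, seed=1):
    """Among flips that immediately follow `streak_len` tails in a row,
    return the fraction that come up heads."""
    rng = random.Random(seed)
    heads_after = total = 0
    run = 0  # length of the current run of tails
    for _ in range(trials):
        flip = rng.randrange(2)  # 0 = tails, 1 = heads
        if run >= streak_len:    # this flip follows a long tails streak
            total += 1
            heads_after += flip
        run = run + 1 if flip == 0 else 0
    return heads_after / total

print(next_flip_after_streak())  # ~0.5: the streak changes nothing
```

However long the preceding streak, the conditional frequency of heads stays around 50%, exactly as argued above.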




That said, it's just: When you first randomly choose the door, you would
intuitively rather bet that you guessed wrong. The show master is simply
proposing to tell you behind which of the other doors the car is in case
you indeed guessed wrong.

There's not more to it.



Hmm, yea, an interesting way to look at it.



Re: Program logic bugs vs input/environmental errors

2014-10-08 Thread eles via Digitalmars-d
On Wednesday, 8 October 2014 at 07:00:38 UTC, Dominikus Dittes 
Scherkl wrote:

On Wednesday, 8 October 2014 at 01:22:49 UTC, Timon Gehr wrote:



It doesn't matter if he uses his knowledge to open always a
false door.


It does. Actually, this is the single most important thing.


It only matters that you open your door AFTER him,
which allows you to react on the result of his door. If you
open the door first, your chance is only 1/3.


If his choice were completely random, as you seem to suggest above 
(when actually his choice is conditioned by your first choice), then, 
even if you open a door after him, the only thing you have is the fact 
that you are now in a problem with a 50% probability of winning.


If you remove the above piece of information, you could simply assume 
that there are only two doors and you are to open one of them. In that 
case it is just 50/50.


Re: Program logic bugs vs input/environmental errors

2014-10-08 Thread Dominikus Dittes Scherkl via Digitalmars-d

On Wednesday, 8 October 2014 at 01:22:49 UTC, Timon Gehr wrote:
The secret behind the monty hall scenario, is that the host is 
actually leaking extra information to you about where the car might 
be.

You make a first choice, which has 1/3 chance of being right, then 
the host opens another door, which is *always* wrong. This last part 
is where the information leak comes from. The host's choice is *not* 
fully random, because if your initial choice was the wrong door, then 
he is *forced* to pick the other wrong door (because he never opens 
the right door, for obvious reasons), thereby indirectly revealing 
which is the right door. So we have:

1/3 chance: you picked the right door. Then the host can randomly 
choose between the 2 remaining doors. In this case, no extra info is 
revealed.

2/3 chance: you picked the wrong door, and the host has no choice but 
to pick the other wrong door, thereby indirectly revealing the right 
door.

So if you stick with your initial choice, you have 1/3 chance of 
winning, but if you switch, you have 2/3 chance of winning, because 
if your initial choice was wrong, which is 2/3 of the time, the host 
is effectively leaking out the right answer to you.

The supposedly counterintuitive part comes from wrongly assuming that 
the host has full freedom to pick which door to open, which he does 
not

But yes, he has. It makes no difference.
If he ever opened the right door, you would just take it too.
So if the prize is behind one of the two doors you did not choose 
first, you will always get it.

The problem with this explanation is simply that it is too long 
and calls the overly detailed reasoning a 'secret'. :o)


So take this shorter explanation:

"There are three doors and two of them are opened, one by him
and one by you. So the chance to win is two out of three."
It doesn't matter whether he uses his knowledge to always open a
false door. It only matters that you open your door AFTER him,
which allows you to react to the result of his door. If you
open the door first, your chance is only 1/3.
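The disagreement in this subthread is easy to settle by simulation. A sketch (in Python; the door numbering, trial count, and seed are arbitrary) of the standard rules, where the host always opens a losing door:

```python
import random

DOORS = (0, 1, 2)

def play(switch, rng):
    car = rng.randrange(3)   # door hiding the car
    pick = rng.randrange(3)  # contestant's first choice
    # The host opens a door that is neither the pick nor the car.
    # (When pick == car he has two options; which one he opens does
    # not affect the win rates below.)
    host = next(d for d in DOORS if d != pick and d != car)
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in DOORS if d != pick and d != host)
    return pick == car

def win_rate(switch, trials=100_000, seed=42):
    rng = random.Random(seed)
    return sum(play(switch, rng) for _ in range(trials)) / trials

print(win_rate(False))  # ~1/3 when sticking
print(win_rate(True))   # ~2/3 when switching
```

Sticking wins about a third of the time and switching about two thirds, matching the "choosing both other doors" reading earlier in the thread; making the host's choice fully random (sometimes revealing the car) is what collapses the game to 50/50.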