Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Jacob Carlborg via Digitalmars-d

On 2016-07-12 06:37, Walter Bright wrote:


The example you gave of .ptr resulting in unsafe code has been in
bugzilla since 2013, and has an open PR on it to fix it.

  https://issues.dlang.org/show_bug.cgi?id=11176

You didn't submit it to bugzilla - if you don't post problems to
bugzilla, most likely they will get overlooked, and you will get
frustrated. @safe issues are tagged with the 'safe' keyword in bugzilla.
If you know of other bugs with @safe, and they aren't in the list,
please add them. Saying generically that @safe has holes in it is
useless information since it is not actionable and nobody keeps track of
bugs posted on the n.g. nor are they even findable if you suspect
they're there.


Not sure if this is what deadalnix is thinking of, but @safe should be a 
whitelist of features, not a blacklist [1]. But you already closed that 
bug report as invalid.


[1] https://issues.dlang.org/show_bug.cgi?id=12941

--
/Jacob Carlborg


Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Walter Bright via Digitalmars-d

On 7/11/2016 10:42 PM, Ola Fosheim Grøstad wrote:

So yes, many languages are drawing on those principles in their type
systems.


D draws many features that come from functional programming. But I am 
not aware of any that come specifically from Prolog.


And just to be clear, D aims to be a useful programming language. It is 
not intended as a vehicle for programming language research.


It is also entirely possible for a research language to advance the 
state of the art (i.e. programming language theory), but not be useful. 
That is not what D is about, though.


D's job is to get s*** done quickly and efficiently and make money for the 
businesses that use it.




Re: Card on fire

2016-07-11 Thread Jacob Carlborg via Digitalmars-d

On 2016-07-11 23:23, Walter Bright wrote:


Yeah, I consider myself very lucky, as I leave the machine on all the
time. I was sitting next to it when it burst into flame, and cutting the
power put it out. In the future I plan on cutting the time for it to go
into hibernation. Fortunately it's a metal case that doesn't burn, but
I'm still thinking about bending some tin to act as baffles over the
vents, and setting it on a metal plate.


I put my computer into sleep mode when I'm not using it for a longer 
period of time. It seems to shut down everything but keeps some power to 
keep the data in RAM alive, so it turns back on instantly.


--
/Jacob Carlborg


Re: Evaluation order of "+="

2016-07-11 Thread Patrick Schluter via Digitalmars-d

On Tuesday, 12 July 2016 at 05:46:58 UTC, Patrick Schluter wrote:

On Tuesday, 12 July 2016 at 00:16:58 UTC, deadalnix wrote:

On Monday, 11 July 2016 at 23:31:40 UTC, Danika wrote:

On Monday, 11 July 2016 at 23:04:00 UTC, Johan Engelen wrote:
LDC recently changed the evaluation order of "+=" (I think 
unintentionally, some other eval order problems were fixed). 
Now, it is different from DMD.
I am going to argue that I think DMD's order is more useful 
in the context of fibers, and would like your opinion.


I really think it is a bug; in C it prints 10. And following 
the flow, you changed the sum to 9 and after that added 1, so 
it would be 10.


In C, it is UB.


A function call is a sequence point in C, so it is not UB.


UB happens when a variable is changed more than once between 
sequence points, which is not the case here.


What probably happens here in LDC is that the call is inlined, 
and the sequence point is therefore lost. That is one of the 
dangers of aggressive inlining. The behaviour described by the OP 
would definitely be a bug in C. For D I don't know, as I don't 
know whether the sequence point rules are as strictly defined as 
in C.





Re: Evaluation order of "+="

2016-07-11 Thread Patrick Schluter via Digitalmars-d

On Tuesday, 12 July 2016 at 00:16:58 UTC, deadalnix wrote:

On Monday, 11 July 2016 at 23:31:40 UTC, Danika wrote:

On Monday, 11 July 2016 at 23:04:00 UTC, Johan Engelen wrote:
LDC recently changed the evaluation order of "+=" (I think 
unintentionally, some other eval order problems were fixed). 
Now, it is different from DMD.
I am going to argue that I think DMD's order is more useful 
in the context of fibers, and would like your opinion.


I really think it is a bug; in C it prints 10. And following 
the flow, you changed the sum to 9 and after that added 1, so 
it would be 10.


In C, it is UB.


A function call is a sequence point in C, so it is not UB. What 
probably happens here in LDC is that the call is inlined, and the 
sequence point is therefore lost. That is one of the dangers 
of aggressive inlining. The behaviour described by the OP would 
definitely be a bug in C. For D I don't know, as I don't know 
whether the sequence point rules are as strictly defined as in C.


Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 12 July 2016 at 05:30:57 UTC, Walter Bright wrote:

On 7/11/2016 10:01 PM, Ola Fosheim Grøstad wrote:

Of course logic programming has had a big impact on the state of
the art.

Prolog -> Datalog
Datalog -> magic sets
magic sets -> inference engines
inference engines  -> static analysis

And that is only a small part of it.


Can you trace any Prolog innovations in 
C/C++/Java/C#/Go/Rust/D/Swift/Javascript/Fortran/Lisp?


I think you are taking the wrong view here. Logic programming is 
a generalized version of functional programming where you can 
have complex expressions on the left-hand side. It is basically 
unification:


https://en.wikipedia.org/wiki/Unification_(computer_science)#Application:_Unification_in_logic_programming

So yes, many languages are drawing on those principles in their 
type systems.





Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Walter Bright via Digitalmars-d

On 7/11/2016 10:15 PM, Shachar Shemesh wrote:

D says any such cast is UB.


That's why such casts are not allowed in @safe code. There's also no way 
to write a storage allocator in @safe code.


Code that is not checkably safe is needed in real world programming. The 
difference between D and C++ here is that D provides a means of marking 
such code as unsafe so the rest can be checkably safe, and C++ does not.
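
A minimal sketch of that marking, using a hypothetical allocation helper 
(not actual Phobos code): the unchecked operations are confined to a small 
@trusted function, and its callers stay checkably @safe.

```
import core.stdc.stdlib : malloc;

// Hypothetical helper: the only place where unchecked operations happen.
// @trusted means the author vouches for the body; the compiler does not
// check it, but callers may treat it as if it were @safe.
@trusted ubyte[] allocateBytes(size_t n)
{
    auto p = cast(ubyte*) malloc(n);
    return p is null ? null : p[0 .. n];  // pointer slicing is a @system operation
}

// Mechanically checked by the compiler.
@safe void useBuffer()
{
    auto buf = allocateBytes(64);  // fine: @trusted is callable from @safe
    if (buf.length) buf[0] = 42;   // ordinary @safe code from here on
    // (freeing is omitted for brevity)
}

void main() @safe { useBuffer(); }
```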




Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Andrei Alexandrescu via Digitalmars-d

On 07/12/2016 01:15 AM, Shachar Shemesh wrote:

On 08/07/16 22:26, Andrei Alexandrescu wrote:


I agree with that. What would be a good example? Where is the reference
to Walter's promotion of UB in @safe code?


Andrei


I don't have an example by Walter, but I can give you an example by Andrei.

In D-Conf.

On Stage.

During the keynotes.

Immediately after knocking down C++ for doing the precise same thing,
but in a way that is both defined and less likely to produce errors.


Love the drama. I was quite excited to see what follows :o).


The topic was reference counting's interaction with immutable (see
deadalnix's comment, to which I completely agree, about inter-features
interactions).


Amaury failed to produce an example to support his point, aside from a 
rehash of a bug report from 2013 that is virtually fixed. Do you have any?



When asked (by me) how you intend to actually solve this,
you said that since you know where the memory comes from, you will cast
away the immutability.

Casting away immutability is UB in D.


I understand. There is an essential detail that sadly puts an 
anticlimactic end to the telenovela. The unsafe cast happens at 
allocator level. Inside any memory allocator, there is a point at which 
behavior outside the type system happens: memory that is untyped becomes 
typed, and vice versa (during deallocation). As long as you ultimately 
use system primitives for getting untyped bytes, at some point you'll 
operate outside the type system. It stands to reason, then, that at 
allocator level, information and manipulations outside the type system's 
capabilities are possible and legal, so long as such manipulations are 
part of the standard library and offer defined behavior. This is par for 
the course in C++ and any systems language.


The solution (very ingenious, due to dicebot) in fact does not quite 
cast immutability away. Starting from a possibly immutable pointer, it 
subtracts an offset from it. At that point the memory is not tracked by 
the type system, but known to the allocator to contain metadata 
associated with the pointer that had been allocated with it. After the 
subtraction, the cast exposes the data which is mutable without 
violating the immutability of the object proper. As I said, it's quite 
an ingenious solution.
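
A rough sketch of the technique being described, with a hypothetical 
metadata layout (a mutable reference count stored just before the payload); 
the real allocator code is more involved.

```
struct Meta { size_t refCount; }  // hypothetical mutable metadata block

// @system on purpose: this deliberately steps outside the type system,
// at allocator level, as described above.
void incRef(T)(immutable(T)* payload) @system
{
    // Step back from the payload to the allocator's metadata. The bytes
    // reached this way were never typed as immutable, so mutating them
    // leaves the immutable object itself untouched.
    auto raw  = cast(ubyte*) payload;
    auto meta = cast(Meta*)(raw - Meta.sizeof);
    ++meta.refCount;
}
```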



Not long before that, you laughed at C++ for its "mutable" keyword,
which allows doing this very thing in a way that is:
A. Fully defined (if you know what you're doing)
and
B. Not requiring a cast


I think we're in good shape with what we have; mutable has too much 
freedom and it's good to get away without it.




Andrei



Re: UB in D

2016-07-11 Thread Shachar Shemesh via Digitalmars-d

On 10/07/16 02:44, H. S. Teoh via Digitalmars-d wrote:

I find this rather disturbing, actually.  There is a fine line between
taking advantage of asserts to elide stuff that the programmer promises
will not happen, and eliding something that's defined to be UB and
thereby resulting in memory corruption.


I like clang's resolution to this problem. On the one hand, leaving 
things undefined allows the compiler to optimize away cases that would, 
otherwise, be horrible for performance.


On the other hand, these optimizations sometimes turn code that was 
meant to be okay into really not okay.


LLVM, at least for C and C++, has an undefined behavior sanitizer. You 
can turn it on, and any case where a check that a superficial reading of 
the code suggests should take place, but was optimized away due to 
undefined behavior, turns into a warning. This allows you to write code 
in a sane way while not putting in a ton (metric or otherwise, as I won't 
fight over a 10% difference) of security holes.


Shachar


Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Walter Bright via Digitalmars-d

On 7/11/2016 10:01 PM, Ola Fosheim Grøstad wrote:

Of course logic programming has had a big impact on the state of
the art.

Prolog -> Datalog
Datalog -> magic sets
magic sets -> inference engines
inference engines  -> static analysis

And that is only a small part of it.


Can you trace any Prolog innovations in 
C/C++/Java/C#/Go/Rust/D/Swift/Javascript/Fortran/Lisp?




Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Jack Stouffer via Digitalmars-d

On Tuesday, 12 July 2016 at 04:37:06 UTC, Walter Bright wrote:
If I may rant a bit, lots of posters here posit that with "more 
process", everything will go better.


Gah, I hate this idea. It's pervasive in every office in the 
country. "Oh if we just had better tools we could manage our 
projects better." Meanwhile the manager on the project hasn't 
checked in with the engineers in weeks and probably has no idea 
what they're working on.


It's a people problem 99% of the time.



Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Shachar Shemesh via Digitalmars-d

On 08/07/16 22:26, Andrei Alexandrescu wrote:


I agree with that. What would be a good example? Where is the reference
to Walter's promotion of UB in @safe code?


Andrei


I don't have an example by Walter, but I can give you an example by Andrei.

In D-Conf.

On Stage.

During the keynotes.

Immediately after knocking down C++ for doing the precise same thing, 
but in a way that is both defined and less likely to produce errors.



The topic was reference counting's interaction with immutable (see 
deadalnix's comment, to which I completely agree, about inter-features 
interactions). When asked (by me) how you intend to actually solve this, 
you said that since you know where the memory comes from, you will cast 
away the immutability.


Casting away immutability is UB in D.

Not long before that, you laughed at C++ for its "mutable" keyword, 
which allows doing this very thing in a way that is:

A. Fully defined (if you know what you're doing)
and
B. Not requiring a cast


C++ fully defines when it is okay to cast away constness, gives you aids 
so that you know that that's what you are doing, and nothing else, and 
gives you a method by which you can do it without a cast if the 
circumstances support it.


D says any such cast is UB.

Shachar


Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 12 July 2016 at 04:52:14 UTC, Walter Bright wrote:
"Prolog and other logic programming languages have not had a 
significant impact on the computer industry in general."


  https://en.wikipedia.org/wiki/Prolog#Limitations

So, no.


That appears to be a 1995 reference from a logic programming 
languages conference. Of course logic programming has had a big 
impact on the state of the art.


Prolog -> Datalog
Datalog -> magic sets
magic sets -> inference engines
inference engines  -> static analysis

And that is only a small part of it.

I'm afraid that is seriously mistaken about C++'s influence on 
the state of the art, in particular compile time polymorphism


Nah. You are confusing state-of-the-art with widespread system 
support.


Also, although C++ did not invent OOP, OOP's late 1980s surge 
in use, popularity, and yes, *influence* was due entirely to


In commercial application development sure. In terms of OOP 
principles and implementation, hell no.




Re: exceptions vs error codes

2016-07-11 Thread Walter Bright via Digitalmars-d

On 7/11/2016 9:10 PM, Chris Wright wrote:

On Mon, 11 Jul 2016 06:13:00 -0700, Walter Bright wrote:


On 7/9/2016 8:02 PM, Superstar64 wrote:

Would it be possible and a good idea to have a language feature that
allows some exceptions to use error code code generation.


If you want to return an error code, return an error code. No language
feature is required.


I think the intent is to tell the compiler: the error case is very
common, so make that case faster at the expense of the happy path.


I really have no idea what the benefit is of telling the compiler that.



Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Walter Bright via Digitalmars-d

On 7/11/2016 7:46 PM, Ola Fosheim Grøstad wrote:

On Monday, 11 July 2016 at 22:09:11 UTC, Walter Bright wrote:

at one or more of the factors, Scheme included. Not Prolog either, a
singularly useless, obscure and failed language.


Err... Prolog is in use and has been far more influential on the state
of the art


"Prolog and other logic programming languages have not had a significant 
impact on the computer industry in general."


  https://en.wikipedia.org/wiki/Prolog#Limitations

So, no.



than C++ or D ever will be.


I'm afraid that is seriously mistaken about C++'s influence on the state 
of the art, in particular compile time polymorphism and the work of Igor 
Stepanov, and D's subsequent influence on C++.


Also, although C++ did not invent OOP, OOP's late 1980s surge in use, 
popularity, and yes, *influence* was due entirely to C++. I was there, 
in the center of that storm. Other languages fell all over themselves to 
add OOP extensions due to this. Java, Pascal, C#, etc. owe their OOP 
abilities to C++'s influence. Even Fortran got on the bandwagon.





Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Walter Bright via Digitalmars-d

On 7/11/2016 7:23 PM, deadalnix wrote:

On Tuesday, 12 July 2016 at 01:28:31 UTC, Walter Bright wrote:

I don't see anything actionable in your comment.


Defining in which way @safe actually ensures safety would be a good start.

I'm sorry for the frustration, but the "mention a problem, get asked for
an example, provide example, example is debated to death while problem
is ignored" cycle has become the typical interaction pattern around
here, and that is VERY frustrating.



The example you gave of .ptr resulting in unsafe code has been in 
bugzilla since 2013, and has an open PR on it to fix it.


  https://issues.dlang.org/show_bug.cgi?id=11176

You didn't submit it to bugzilla - if you don't post problems to 
bugzilla, most likely they will get overlooked, and you will get 
frustrated. @safe issues are tagged with the 'safe' keyword in bugzilla. 
If you know of other bugs with @safe, and they aren't in the list, 
please add them. Saying generically that @safe has holes in it is 
useless information since it is not actionable and nobody keeps track of 
bugs posted on the n.g. nor are they even findable if you suspect 
they're there.




If I may rant a bit, lots of posters here posit that with "more 
process", everything will go better. Meanwhile, we DO have process for 
bug reports. They go to bugzilla. Posting bugs to the n.g. does not 
work. More process doesn't work if people are unwilling to adhere to it.





Re: exceptions vs error codes

2016-07-11 Thread Chris Wright via Digitalmars-d
On Mon, 11 Jul 2016 06:13:00 -0700, Walter Bright wrote:

> On 7/9/2016 8:02 PM, Superstar64 wrote:
>> Would it be possible and a good idea to have a language feature that
>> allows some exceptions to use error code code generation.
> 
> If you want to return an error code, return an error code. No language
> feature is required.

I think the intent is to tell the compiler: the error case is very 
common, so make that case faster at the expense of the happy path.

Which is fine in theory, but probably not worth the effort, and it isn't 
really the greatest use case for exceptions in general. If errors are so 
common, you probably want to impress that on the programmer with an 
explicit status code.

For my part, I tend to follow the C# pattern of `foo` and `tryFoo`. I'm 
just writing a library. You can tell me whether this piece of data is 
trusted to be correct or not.
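
A minimal sketch of that pattern (hypothetical names, in D rather than C#): 
the throwing variant is layered on top of the non-throwing one, so the 
caller chooses which to use.

```
import std.conv : to, ConvException;

// Non-throwing variant: reports failure through the return value.
bool tryParseInt(string s, out int result) nothrow
{
    try { result = s.to!int; return true; }
    catch (Exception) { return false; }
}

// Throwing variant, for callers who trust their input.
int parseInt(string s)
{
    int value;
    if (!tryParseInt(s, value))
        throw new ConvException("not an integer: " ~ s);
    return value;
}

void main()
{
    int n;
    assert(tryParseInt("42", n) && n == 42);
    assert(!tryParseInt("oops", n));
    assert(parseInt("7") == 7);
}
```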


Re: D is crap

2016-07-11 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 11 July 2016 at 18:14:11 UTC, Paulo Pinto wrote:

Actually NeXTStep drivers were written in Objective-C.



NeXT was a cool concept, but it was sad that  they picked such an 
annoying language to build it.


They are not alone: as of Android N, Google is making it pretty 
clear that if one tries to circumvent the constrained set of NDK 
APIs and work around the JNI to access existing shared objects, 
the application will simply be killed.


I don't do Android programming, but NDK is actually fairly rich 
in comparison to Apple OSes without Objective-C bindings AFAIK. 
The problem seems to be more in the varying hardware 
configurations / quality of implementation.


Not using Java on Android sounds like a PITA to be honest.

If you check the latest BUILD, the current approach being 
evangelised is .NET Native for 90% of the code, C++/CX or plain 
C++ with WRL for glueing to low level code until C# gets the 
missing features from System C#, and C++ for everything else.


I don't know much about .NET Native, does it apply to or will 
they bring it to .NET Core?


A change in recent years is that Microsoft appears to invest more 
in their C++ offering, so apparently they no longer see C# as a 
wholesale replacement.


The WinRT, User Driver Framework, the new container model and 
Linux subsystem, the Checked C, input to the C++ Core


I haven't paid much attention to WinRT lately, they have a Linux 
subsystem?




Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 11 July 2016 at 22:09:11 UTC, Walter Bright wrote:
at one or more of the factors, Scheme included. Not Prolog 
either, a singularly useless, obscure and failed language.


Err... Prolog is in use and has been far more influential on the 
state of the art than C++ or D ever will be.


I think this discussion is dead...




Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Andrei Alexandrescu via Digitalmars-d

On 07/11/2016 01:50 PM, deadalnix wrote:

On Friday, 8 July 2016 at 19:26:59 UTC, Andrei Alexandrescu wrote:

On 07/08/2016 02:42 PM, deadalnix wrote:

It is meaningless because sometimes you have A and B that are both safe
on their own, but doing both is unsafe. In which case A or B needs to be
banned, but nothing tells you which one. This isn't a bug, this is
a failure to have a principled approach to safety.


What would be a good example? Is there a bug report for it?



For instance:

@safe
int foo(int *iPtr) {
 return *iPtr;
}

@safe
int bar(int[] iSlice) {
 return foo(iSlice.ptr);
}


Here bar should not pass the @safe test because it may produce a 
non-dereferenceable pointer. Consider:


@safe int baz(int[] a) { return bar(a[$ .. $]); }

It is legal (and safe) to take an empty slice at the end of an array. 
Following the call, bar serves foo an invalid pointer that shan't be 
dereferenced.


I added https://issues.dlang.org/show_bug.cgi?id=16266. It looks to me 
like a corner case rather than an illustration of a systemic issue.



foo assumes that creating an invalid pointer is not safe, while bar
assumes that .ptr is safe as it doesn't access memory. If the slice's
size is 0, that is not safe.

This is one such case where each of these operations is safe granted some
preconditions, but they violate each other's preconditions, so using both
is unsafe.


The position is inconsistent because the dictatorship refuses to
compromise on mutually exclusive goals. For instance, @safe is defined
as ensuring memory safety. But not against undefined behaviors (in fact
Walter promotes the use of UB in various situations, for instance when it
comes to shared). You CANNOT have undefined behavior that is defined as
being memory safe.


I agree with that. What would be a good example? Where is the
reference to Walter's promotion of UB in @safe code?



I don't have a specific reference to point to right now.


"Don't do the crime if you can't do the time."


Andrei



Re: Evaluation order of "+="

2016-07-11 Thread deadalnix via Digitalmars-d

On Monday, 11 July 2016 at 23:04:00 UTC, Johan Engelen wrote:
LDC recently changed the evaluation order of "+=" (I think 
unintentionally, some other eval order problems were fixed). 
Now, it is different from DMD.
I am going to argue that I think DMD's order is more useful in 
the context of fibers, and would like your opinion.


Consider this code:
```
int sum;

int return1_add9tosum() {
sum += 9;
return 1;
}

void main() {
sum = 0;
sum += return1_add9tosum();

import std.stdio;
writeln(sum);
}
```
DMD 2.071 prints "10".
LDC master prints "1". (LDC 1.0.0 prints "10")

I find the spec [1] to be unclear on this point, so which one 
is correct?


The bug was caught by code involving fibers. Instead of 
`return1_add9tosum`, a function `return1_yieldsFiber` is 
called, and multiple fibers write to `sum`. In that case, upon 
completing the "+=", an older version of `sum` is used to 
calculate the result. I think that it is best to do what DMD 
does, such that fibers can do "+=" without worrying about 
yields on the rhs.


[1] https://dlang.org/spec/expression.html


There was a very lengthy discussion about this in the past. DMD 
is correct on that one. The semantics are as follows:


int plusEqual(ref int a, int b) {
  a = a + b;
  return a;
}
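
Under that lowering, the original example behaves like this self-contained 
sketch (a hypothetical rewrite, using nested functions for brevity):

```
void main()
{
    import std.stdio : writeln;

    int sum = 0;
    int rhs() { sum += 9; return 1; }
    int plusEqual(ref int a, int b) { a = a + b; return a; }

    plusEqual(sum, rhs());  // rhs() runs first (sum becomes 9), then 9 + 1
    writeln(sum);           // prints 10, matching DMD's order
}
```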



Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread deadalnix via Digitalmars-d

On Tuesday, 12 July 2016 at 01:28:31 UTC, Walter Bright wrote:

I don't see anything actionable in your comment.


Defining in which way @safe actually ensures safety would be a 
good start.

I'm sorry for the frustration, but the "mention a problem, get 
asked for an example, provide example, example is debated to 
death while problem is ignored" cycle has become the typical 
interaction pattern around here, and that is VERY frustrating.




Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Walter Bright via Digitalmars-d

On 7/11/2016 5:34 PM, sarn wrote:

Walter said "all programming languages", but he's obviously referring to
the programming market D is in.


I said "all USEFUL programming languages", thereby excluding toys, 
research projects, etc.


Of course, "useful" is a slippery concept, but a good proxy is having 
widespread adoption.


There are good reasons why Detroit's "concept cars" never survived 
intact into production. And why the windscreen on an airliner is flat 
instead of curved like the rest of the exterior.


Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Walter Bright via Digitalmars-d

On 7/11/2016 5:15 PM, deadalnix wrote:

On Monday, 11 July 2016 at 21:52:36 UTC, Walter Bright wrote:

The root problem is that "@safe guarantees memory safety, and if it
doesn't it is a bug" provides no information as to what the bug is here
and no actionable items as to how to fix it, or even as to what needs
fixing.


It's kind of a meaningless criticism. Any piece of code has a bug if
it doesn't meet the specification, and there's no way to verify it
meets the specification short of proofs, and if anyone wants to work
on proofs I'm all for it.

In the meantime, please post all holes found to bugzilla and tag them
with the 'safe' keyword.


You know, there is a saying: "When the wise point at the moon, the idiot
looks at the finger". I can't force you to look at the moon, I can only
point at it.


I don't see anything actionable in your comment.


Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread deadalnix via Digitalmars-d

On Tuesday, 12 July 2016 at 00:34:12 UTC, sarn wrote:

On Monday, 11 July 2016 at 22:09:11 UTC, Walter Bright wrote:

On 7/10/2016 10:07 PM, Ola Fosheim Grøstad wrote:

[Snip stuff about Scheme]

Scheme is a really nice, elegant language that's fun to hack 
with, but at the end of the day, if people were writing Nginx, 
or the Windows kernel, or HFT systems in Scheme, you can bet 
programmers would be pushing pretty hard for special exceptions 
and hooks and stuff for better performance or lower-level 
access, and eventually you'd end up with another C.


Walter said "all programming languages", but he's obviously 
referring to the programming market D is in.


This is a false dichotomy. Nobody says there should be no 
inconsistencies. Sometimes, an inconsistency is just necessary 
because of other concerns. But it is a cost, and, like all costs, 
it must pay for itself.


Most of these do not pay for themselves in D.



Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread sarn via Digitalmars-d

On Monday, 11 July 2016 at 22:09:11 UTC, Walter Bright wrote:

On 7/10/2016 10:07 PM, Ola Fosheim Grøstad wrote:

[Snip stuff about Scheme]

Scheme is a really nice, elegant language that's fun to hack 
with, but at the end of the day, if people were writing Nginx, or 
the Windows kernel, or HFT systems in Scheme, you can bet 
programmers would be pushing pretty hard for special exceptions 
and hooks and stuff for better performance or lower-level access, 
and eventually you'd end up with another C.


Walter said "all programming languages", but he's obviously 
referring to the programming market D is in.


Re: Evaluation order of "+="

2016-07-11 Thread deadalnix via Digitalmars-d

On Monday, 11 July 2016 at 23:31:40 UTC, Danika wrote:

On Monday, 11 July 2016 at 23:04:00 UTC, Johan Engelen wrote:
LDC recently changed the evaluation order of "+=" (I think 
unintentionally, some other eval order problems were fixed). 
Now, it is different from DMD.
I am going to argue that I think DMD's order is more useful in 
the context of fibers, and would like your opinion.


I really think it is a bug; in C it prints 10. And following 
the flow, you changed the sum to 9 and after that added 1, so 
it would be 10.


In C, it is UB.



Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread deadalnix via Digitalmars-d

On Monday, 11 July 2016 at 21:52:36 UTC, Walter Bright wrote:
The root problem is that "@safe guarantees memory safety, and if it 
doesn't it is a bug" provides no information as to what the bug is 
here and no actionable items as to how to fix it, or even as to what 
needs fixing.


It's kind of a meaningless criticism. Any piece of code has a 
bug if it doesn't meet the specification, and there's no way to 
verify it meets the specification short of proofs, and if 
anyone wants to work on proofs I'm all for it.


In the meantime, please post all holes found to bugzilla and 
tag them with the 'safe' keyword.


You know, there is a saying: "When the wise point at the moon, 
the idiot looks at the finger". I can't force you to look at the 
moon, I can only point at it.




Re: Evaluation order of "+="

2016-07-11 Thread Danika via Digitalmars-d

On Monday, 11 July 2016 at 23:04:00 UTC, Johan Engelen wrote:
LDC recently changed the evaluation order of "+=" (I think 
unintentionally, some other eval order problems were fixed). 
Now, it is different from DMD.
I am going to argue that I think DMD's order is more useful in 
the context of fibers, and would like your opinion.


I really think it is a bug; in C it prints 10. And following the 
flow, you changed the sum to 9 and after that added 1, so it 
would be 10.


Evaluation order of "+="

2016-07-11 Thread Johan Engelen via Digitalmars-d
LDC recently changed the evaluation order of "+=" (I think 
unintentionally, some other eval order problems were fixed). Now, 
it is different from DMD.
I am going to argue that I think DMD's order is more useful in 
the context of fibers, and would like your opinion.


Consider this code:
```
int sum;

int return1_add9tosum() {
sum += 9;
return 1;
}

void main() {
sum = 0;
sum += return1_add9tosum();

import std.stdio;
writeln(sum);
}
```
DMD 2.071 prints "10".
LDC master prints "1". (LDC 1.0.0 prints "10")

I find the spec [1] to be unclear on this point, so which one is 
correct?


The bug was caught by code involving fibers. Instead of 
`return1_add9tosum`, a function `return1_yieldsFiber` is called, 
and multiple fibers write to `sum`. In that case, upon completing 
the "+=", an older version of `sum` is used to calculate the 
result. I think that it is best to do what DMD does, such that 
fibers can do "+=" without worrying about yields on the rhs.


[1] https://dlang.org/spec/expression.html


Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Walter Bright via Digitalmars-d

On 7/10/2016 10:07 PM, Ola Fosheim Grøstad wrote:

Face it, your argument is destroyed :-)

Of course not.


Trying to reparse and reframe your answers isn't going to help. I know 
all those rhetorical tricks.


I wrote:

> All useful computer languages are unprincipled and complex due to a 
number of factors: [...]


to which you replied:

> not true

But there are no examples of such a language that doesn't fail at one or 
more of the factors, Scheme included. Not Prolog either, a singularly 
useless, obscure and failed language. You could come up with lists of 
ever more obscure languages, but that just adds to the destruction of 
your argument.


The fact that other languages like C++ are adopting feature after 
feature from D proves that there's a lot of dazz in D!


Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Walter Bright via Digitalmars-d

On 7/11/2016 11:57 AM, deadalnix wrote:

Alright, but keep in mind that is an example, not the actual problem I'm
talking about. There are many reasonable ways to make the example above
safe: disallow dereferencing pointers from unknown sources,


Once you're in @safe code, the assumption is that pointers are valid. 
Unknown sources are marked @trusted, where the programmer takes 
responsibility to ensure they are valid.




do a bounds check on .ptr, disallow .ptr altogether, and much more.


The PR disallows .ptr in @safe code. The @safe alternative is &a[0] 
which implies a bounds check.
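
A minimal sketch of the difference (assuming, as described above, that 
&a[0] is accepted in @safe code):

```
@safe int firstElement(int[] a)
{
    auto p = &a[0];   // bounds-checked: an empty slice raises a RangeError
                      // instead of yielding a non-dereferenceable pointer
    return *p;        // a.ptr would have skipped that check
}

void main()
{
    import std.stdio : writeln;
    writeln(firstElement([1, 2, 3]));  // prints 1
}
```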




The root problem is that "@safe guarantees memory safety, and if it
doesn't it is a bug" provides no information as to what the bug is here
and no actionable items as to how to fix it, or even as to what needs
fixing.


It's kind of a meaningless criticism. Any piece of code has a bug if it 
doesn't meet the specification, and there's no way to verify it meets 
the specification short of proofs, and if anyone wants to work on proofs 
I'm all for it.


In the meantime, please post all holes found to bugzilla and tag them 
with the 'safe' keyword.




Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Walter Bright via Digitalmars-d

On 7/11/2016 10:50 AM, deadalnix wrote:

foo assumes that creating an invalid pointer is not safe, while bar
assumes that .ptr is safe as it doesn't access memory. If the slice's
size is 0, that is not safe.


There's a PR to fix this:

https://github.com/dlang/dmd/pull/5860




Re: UB in D

2016-07-11 Thread Walter Bright via Digitalmars-d

On 7/11/2016 11:47 AM, deadalnix wrote:

As you can see the behavior of each component here is fairly reasonable.
However, the end result may not be.


As was mentioned elsewhere, integers getting indeterminate values only 
results in memory corruption if the language has an unsafe memory model.


The solution is something like this:

https://github.com/dlang/dlang.org/pull/1420


Re: Card on fire

2016-07-11 Thread Walter Bright via Digitalmars-d

On 7/11/2016 7:49 AM, Meta wrote:

It wouldn't happen to be an nVidia card, would it?


MSI GeForce GT 720 DirectX 12 N720-1GD3HLP 1GB 64-Bit DDR3 PCI Express 
2.0 x16 HDCP Ready Video Card


The fire happened at the junction between the graphics card and the 
motherboard. I'm not totally sure which failed. Could have been a bad 
capacitor on the motherboard. Maybe it was a defect in the connector.


It had been in near continuous use for over a year.


Re: Card on fire

2016-07-11 Thread Walter Bright via Digitalmars-d

On 7/11/2016 6:06 AM, Steven Schveighoffer wrote:

That's kind of scary. It's one of those things you don't think about
happening -- like what if you weren't home when this happened.

Could have been a lot worse.


Yeah, I consider myself very lucky, as I leave the machine on all the 
time. I was sitting next to it when it burst into flame, and cutting the 
power put it out. In the future I plan on cutting the time for it to go 
into hibernation. Fortunately it's a metal case that doesn't burn, but 
I'm still thinking about bending some tin to act as baffles over the 
vents, and setting it on a metal plate.


Re: Card on fire

2016-07-11 Thread Walter Bright via Digitalmars-d

On 7/9/2016 7:43 PM, Andrei Alexandrescu wrote:

Got a text from Walter - his famous fanless graphics card caught fire
along with the motherboard. He'll be outta commission for a few days. --


I can do email and such with my laptop, but it is not suitable for 
development.


I spent yesterday cleaning many many years of dust out of my office :-) 
though the burned plastic smell lingers.


A new mobo and graphics card were overnighted, hopefully nothing else 
got fried.


Re: Card on fire

2016-07-11 Thread Walter Bright via Digitalmars-d

On 7/9/2016 9:11 PM, A.B wrote:

https://www.youtube.com/watch?v=NOErZuzZpS8


You must be older than me to think of that song!


Re: D is crap

2016-07-11 Thread Charles Hixson via Digitalmars-d
Garbage collection allows many syntax "liberalizations" that lack of 
garbage collection renders either impossible or highly dangerous.  (In 
this definition of "garbage collection" I'm including variations like 
reference counting.)  For an example of this consider the dynamic array 
type.  You MUST have garbage collection to use that safely...unless you 
require the freeing of memory with every change in size.  C++ does that 
with the STL, but if you want the dynamic types built into the language, 
then you need garbage collection built into the language.  (This is 
different from saying it needs to be active everywhere, but once you've 
got it, good places to use it keep showing up.)


One of the many advantages of the dynamic array type being built into 
the language is that arrays of different sizes are reasonably comparable 
by methods built into the language.  This is used all over the place.  
In D I almost never need to use "unchecked conversion".
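
For instance, a minimal sketch of the built-in dynamic array relying on 
the collector:

```
void main()
{
    import std.stdio : writeln;

    int[] a;
    foreach (i; 0 .. 1000)
        a ~= i;              // appending may reallocate; the stale blocks
                             // are left for the GC rather than freed on
                             // every change in size

    auto view = a[0 .. 3];   // slices share memory with a, safely, because
                             // the GC keeps the block alive as long as needed
    writeln(a.length, " ", view);
}
```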


On 07/11/2016 02:30 AM, Chris via Digitalmars-d wrote:

On Sunday, 10 July 2016 at 03:25:16 UTC, Ola Fosheim Grøstad wrote:


Just like there is no C++ book that does not rant about how great 
RAII is... What do you expect from a language evangelist? The first 
Java implementation Hotspot inherited its technology from StrongTalk, 
a Smalltalk successor. It was not a Java phenomenon, and FWIW both 
Lisp, Simula and Algol68 were garbage collected.


Please stop intentionally missing the point. I don't care if Leonardo 
Da Vinci already had invented GC - which wouldn't surprise me - but 
this is not the point. My point is that GC became a big thing in the 
late 90ies early 2000s which is in part owed to Java having become the 
religion of the day (not Lisp or SmallTalk)[1]. D couldn't have 
afforded not to have GC when it first came out. It was expected of a 
(new) language to provide GC by then - and GC had become a selling 
point for new languages.


[1] And of course computers had become more powerful and could handle 
the overhead of GC better than in the 80ies.


What was "new" with Java was compile-once-run-everywhere. Although, 
that wasn't new either, but it was at least marketable as new.


Java was the main catalyst for GC - or at least for people demanding 
it. Practically everybody who had gone through IT courses, college 
etc. with Java (and there were loads) wanted GC. It was a given for 
many people.


Well, yes, of course Java being used in universities created a demand 
for Java and similar languages. But GC languages were extensively 
used in universities before Java.


Yes, it didn't last long. But the fact that they bothered to 
introduce it, shows you how big GC was/is.


No, it shows how demanding manual reference counting was in 
Objective-C on regular programmers. GC is the first go to solution 
for easy memory management, and has been so since the 60s. Most high 
level languages use garbage collection.


It wasn't demanding. I wrote a lot of code in Objective-C and it was 
perfectly doable. You even have features like `autorelease` for return 
values. The thing is that Apple had become an increasingly popular 
platform and more and more programmers were writing code for OS X. So 
they thought, they'd make it easier and reduce potential memory leaks 
(introduced by not so experienced Objective-C coders) by adding GC, 
especially because a lot of programmers expected GC "in this day and 
age".






Re: D is crap

2016-07-11 Thread Jacob Carlborg via Digitalmars-d

On 2016-07-11 14:23, Luís Marques wrote:


Doesn't seem to work for me on 10.11.5. Maybe you need to enable that on
the latest OSes?


It works for me. I don't recall specifically enabling crash reports. Are 
you looking at "All Messages"? You can also look at 
~/Library/Logs/DiagnosticReports to see if a new file shows up.



In any case, that will probably get you a mangled stack
trace, right?


Well, OS X doesn't know anything about D mangling ;). But it will demangle 
C++ symbols.



It would still be useful (especially if the stack trace is
correct; in LLDB I get some crappy ones sometimes) but it would not be
as convenient as the stack trace on Windows generated by the druntime.


Yes, of course.

--
/Jacob Carlborg


Re: Go's march to low-latency GC

2016-07-11 Thread deadalnix via Digitalmars-d

On Monday, 11 July 2016 at 13:05:09 UTC, Russel Winder wrote:

Agreed. I don't know why golang guys bother about it.


Because they have nothing else to propose than a massive goroutine 
orgy, so they kind of have to make it work.



Maybe because they are developing a language for the 1980s?

;-)


It's not like they are using the Plan9 toolchain...

Ho wait...



Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread deadalnix via Digitalmars-d
On Monday, 11 July 2016 at 18:00:20 UTC, Steven Schveighoffer 
wrote:

On 7/11/16 1:50 PM, deadalnix wrote:
On Friday, 8 July 2016 at 19:26:59 UTC, Andrei Alexandrescu 
wrote:

On 07/08/2016 02:42 PM, deadalnix wrote:
It is meaningless because sometimes you have A and B that are both 
safe on their own, but doing both is unsafe. In which case A or B 
needs to be banned, but nothing tells you which one. This isn't a 
bug, this is a failure to have a principled approach to safety.


What would be a good example? Is there a bug report for it?



For instance:

@safe
int foo(int *iPtr) {
return *iPtr;
}

@safe
int bar(int[] iSlice) {
return foo(iSlice.ptr);
}

foo assumes that creating an invalid pointer is not safe, while 
bar assumes that .ptr is safe as it doesn't access memory. If the 
slice's size is 0, that is not safe.


That was reported and being worked on:

https://github.com/dlang/dmd/pull/5860

-Steve


Alright, but keep in mind that is an example, not the actual 
problem I'm talking about. There are many reasonable ways to make 
the example above safe: disallow dereferencing pointers from 
unknown sources, do a bounds check on .ptr, disallow .ptr 
altogether, and much more.


The root problem is that "@safe guarantees memory safety, and if it 
doesn't it is a bug" provides no information as to what the bug is 
here and no actionable items as to how to fix it, or even as to what 
needs fixing.




Re: UB in D

2016-07-11 Thread deadalnix via Digitalmars-d

On Saturday, 9 July 2016 at 23:44:07 UTC, H. S. Teoh wrote:
I find this rather disturbing, actually.  There is a fine line 
between taking advantage of asserts to elide stuff that the 
programmer promises will not happen, and eliding something 
that's defined to be UB and thereby resulting in memory 
corruption.


[...]


T


While I understand how frustrating it looks, there is simply no 
way around it in practice. For instance, the shift operation 
on x86 is essentially:


x >> (y & (typeof(x).sizeof * 8 - 1))

But this differs on other platforms. This means that in 
practice, the compiler would have to add bounds checks on every 
shift. The performance impact would be through the roof; plus, 
you'd have to specify what to do in case of an out-of-range shift.


Contrary to popular belief, the compiler does not try to screw you 
with UB. There is no code of the form "if this is UB, then do 
this insanely stupid shit". But what happens is that algorithm A 
does not explore the UB case - because it is UB - and just does 
nothing with it, and algorithm B on its side does not check 
for UB, but will reuse results from A and do something unexpected.


In Andrei's example, the compiler won't say, fuck this guy, he 
wrote UB. What will happen is that the range checking code will 
conclude that 9 >> something must be smaller than 10. Then the 
control flow simplification code will use that range to conclude 
that the bounds check must always be true and replace it with an 
unconditional branch.


As you can see the behavior of each component here is fairly 
reasonable. However, the end result may not be.


Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Jack Stouffer via Digitalmars-d

On Monday, 11 July 2016 at 18:18:22 UTC, deadalnix wrote:

Lisp.


Which one?


Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread deadalnix via Digitalmars-d

On Saturday, 9 July 2016 at 00:14:34 UTC, Walter Bright wrote:

On 7/8/2016 2:58 PM, Ola Fosheim Grøstad wrote:

On Friday, 8 July 2016 at 21:24:04 UTC, Walter Bright wrote:

On 7/7/2016 5:56 PM, deadalnix wrote:
While this is very true, it is clear that most of D's complexity 
doesn't come from there. D's complexity comes for the most part 
from things being completely unprincipled and a lack of vision.


All useful computer languages are unprincipled and complex 
due to a number of

factors:


I think this is a very dangerous assumption. And also not true.


Feel free to post a counterexample. All you need is one!



Lisp.



What is true is that it is difficult to gain traction if a 
language does not

look like a copy of a pre-existing and fairly popular language.


I.e. Reason #2:

"what programmers perceive as logical and intuitive is often 
neither logical nor intuitive to a computer"


That's why we have compiler writer and language designers.



Re: D is crap

2016-07-11 Thread Paulo Pinto via Digitalmars-d
On Monday, 11 July 2016 at 16:44:27 UTC, Ola Fosheim Grøstad 
wrote:

On Monday, 11 July 2016 at 16:26:11 UTC, Paulo Pinto wrote:

Happy not to disappoint.  :)


You never disappoint in the GC department ;-)

OS vendors are the ones that eventually decided what is a 
systems programming language on their OSes.


To a large extent on Apple and Microsoft OSes. Not so much on 
open source OSes as you are not tied down to binary blobs.


And if they say so, like Apple is nowadays doing with Swift, 
developers will have no option other than to accept it or move to 
another platform, regardless of their opinion of what features a 
systems programming language should offer.


It is true that there have been policy changes which makes it 
difficult to access features like GPU and Audio on OS-X/iOS 
without touching Objective-C or Swift. You don't have to use it 
much, but you need some binding stubs in Objective-C or 
Objective-C++ if you want to be forward compatible (i.e. link 
on future versions of the OS without recompiling).


But I _have_ noticed that Apple increasingly is making low 
level setup only available through Objective-C/Swift. It is 
probably a lock-in strategy to raise porting costs to Android.


Actually NeXTStep drivers were written in Objective-C.

http://www.cilinder.be/docs/next/NeXTStep/3.3/nd/OperatingSystem/Part3_DriverKit/Concepts/1_Overview/Overview.htmld/

They are not alone: as of Android N, Google is making it pretty 
clear that if one tries to circumvent the constrained set of NDK 
APIs and work around the JNI to access existing shared objects, 
the application will simply be killed.


http://android-developers.blogspot.de/2016/06/android-changes-for-ndk-developers.html

Which basically boils down to OEMs, 3D rendering and low



Just as C developers who used to bash C++ now have to 
accept that the two biggest C compilers are written in the 
language they love to hate.


There was a thread on reddit recently where some Microsoft 
employees admitted that parts of Windows are now implemented in 
C++ and C#, IIRC. I believe it is parts that run in user mode 
as separate processes, but still...


Yes, the trend started with Windows 8 and the new application 
model based on the initial design of COM+ Runtime, which was the 
genesis of .NET before they decided to ditch it for the CLR.


If you check the latest BUILD, the current approach being 
evangelised is .NET Native for 90% of the code, C++/CX or plain 
C++ with WRL for glueing to low level code until C# gets the 
missing features from System C#, and C++ for everything else.


On the UWP model, DirectX is probably the only user space API 
that doesn't have a WinRT projection fully available, but they 
have been slowly surfacing it in each release.


The WinRT, User Driver Framework, the new container model and 
Linux subsystem, the Checked C, input to the C++ Core Guidelines 
and new C# features all trace back to the MSR work in 
Singularity, Midori and Drawbridge.





Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread Steven Schveighoffer via Digitalmars-d

On 7/11/16 1:50 PM, deadalnix wrote:

On Friday, 8 July 2016 at 19:26:59 UTC, Andrei Alexandrescu wrote:

On 07/08/2016 02:42 PM, deadalnix wrote:

It is meaningless because sometimes you have A and B that are both safe
on their own, but doing both is unsafe. In which case A or B needs to be
banned, but nothing tells you which one. This isn't a bug, this is
a failure to have a principled approach to safety.


What would be a good example? Is there a bug report for it?



For instance:

@safe
int foo(int *iPtr) {
return *iPtr;
}

@safe
int bar(int[] iSlice) {
return foo(iSlice.ptr);
}

foo assumes that creating an invalid pointer is not safe, while bar
assumes that .ptr is safe as it doesn't access memory. If the slice's
size is 0, that is not safe.


That was reported and being worked on:

https://github.com/dlang/dmd/pull/5860

-Steve


Re: Vision for the D language - stabilizing complexity?

2016-07-11 Thread deadalnix via Digitalmars-d

On Friday, 8 July 2016 at 19:26:59 UTC, Andrei Alexandrescu wrote:

On 07/08/2016 02:42 PM, deadalnix wrote:
It is meaningless because sometimes you have A and B that are 
both safe on their own, but doing both is unsafe. In which case 
A or B needs to be banned, but nothing tells you which one. 
This isn't a bug, this is a failure to have a principled 
approach to safety.


What would be a good example? Is there a bug report for it?



For instance:

@safe
int foo(int *iPtr) {
return *iPtr;
}

@safe
int bar(int[] iSlice) {
return foo(iSlice.ptr);
}

foo assumes that creating an invalid pointer is not safe, while 
bar assumes that .ptr is safe as it doesn't access memory. If the 
slice's size is 0, that is not safe.

This is one such case where each of these operations is safe 
granted some preconditions, but they violate each other's 
preconditions, so using both is unsafe.


The position is inconsistent because the dictatorship refuses to 
compromise on mutually exclusive goals. For instance, @safe is 
defined as ensuring memory safety. But not against undefined 
behaviors (in fact Walter promotes the use of UB in various 
situations, for instance when it comes to shared). You CANNOT 
have undefined behavior that is defined as being memory safe.


I agree with that. What would be a good example? Where is the 
reference to Walter's promotion of UB in @safe code?




I don't have a specific reference to point to right now. However, 
there have been several events of "@safe guarantees memory safety; 
it doesn't protect against X" where X is undefined behavior most 
of the time.




Re: Go's march to low-latency GC

2016-07-11 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 11 July 2016 at 17:14:17 UTC, jmh530 wrote:

On Monday, 11 July 2016 at 13:13:02 UTC, Patrick Schluter wrote:


Because of attitudes like shown in that thread
https://forum.dlang.org/post/ilbmfvywzktilhskp...@forum.dlang.org
people who do not really understand why 32 bit systems are 
really problematic even if the apps don't use more than 2 GiB 
of memory.


Here's Linus Torvalds classic rant about 64 bit
https://cl4ssic4l.wordpress.com/2011/05/24/linus-torvalds-about-pae/  (it's 
more about PAE but the reasons why 64 bits is a good thing in general are the 
same: address space!)


Why can't you use both 32bit and 64bit pointers when compiling 
for x86_64?


My guess would be that using 64bit registers precludes the use 
of 32bit registers.


You can, but OSes usually give you randomized memory layout as a 
security measure.




Re: Go's march to low-latency GC

2016-07-11 Thread jmh530 via Digitalmars-d

On Monday, 11 July 2016 at 13:13:02 UTC, Patrick Schluter wrote:


Because of attitudes like shown in that thread
https://forum.dlang.org/post/ilbmfvywzktilhskp...@forum.dlang.org
people who do not really understand why 32 bit systems are 
really problematic even if the apps don't use more than 2 GiB 
of memory.


Here's Linus Torvalds classic rant about 64 bit
https://cl4ssic4l.wordpress.com/2011/05/24/linus-torvalds-about-pae/  (it's 
more about PAE but the reasons why 64 bits is a good thing in general are the 
same: address space!)


Why can't you use both 32bit and 64bit pointers when compiling 
for x86_64?


My guess would be that using 64bit registers precludes the use of 
32bit registers.


Re: Rant after trying Rust a bit

2016-07-11 Thread Max Samukha via Digitalmars-d

On Thursday, 6 August 2015 at 06:54:45 UTC, Walter Bright wrote:

On 8/3/2015 2:19 AM, Max Samukha wrote:
The point is that '+' for string concatenation is no more of 
an 'idiot thing'

than '~'.


Sure it is. What if you've got:

   T add(T)(T a, T b) { return a + b; }

and some idiot overloaded + for T to be something other than 
addition?


That is a general problem with structural typing. Why not assume 
that if a type defines 'length', it must be a range? Then call 
everyone who defines it otherwise an idiot. I admit that special 
treatment of '+' is justified by its long history, but 'idiot 
thing' is obviously out of place.


BTW, it happens that '+' does not always have to be commutative: 
https://en.wikipedia.org/wiki/Near-ring

https://en.wikipedia.org/wiki/Ordinal_arithmetic#Addition




Re: D is crap

2016-07-11 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 11 July 2016 at 16:26:11 UTC, Paulo Pinto wrote:

Happy not to disappoint.  :)


You never disappoint in the GC department ;-)

OS vendors are the ones that eventually decided what is a 
systems programming language on their OSes.


To a large extent on Apple and Microsoft OSes. Not so much on 
open source OSes as you are not tied down to binary blobs.


And if they say so, like Apple is nowadays doing with Swift, 
developers will have no option other than to accept it or move to 
another platform, regardless of their opinion of what features a 
systems programming language should offer.


It is true that there have been policy changes which makes it 
difficult to access features like GPU and Audio on OS-X/iOS 
without touching Objective-C or Swift. You don't have to use it 
much, but you need some binding stubs in Objective-C or 
Objective-C++ if you want to be forward compatible (i.e. link on 
future versions of the OS without recompiling).


But I _have_ noticed that Apple increasingly is making low level 
setup only available through Objective-C/Swift. It is probably a 
lock-in strategy to raise porting costs to Android.


Just as C developers who used to bash C++ now have to 
accept that the two biggest C compilers are written in the 
language they love to hate.


There was a thread on reddit recently where some Microsoft 
employees admitted that parts of Windows are now implemented in 
C++ and C#, IIRC. I believe it is parts that run in user mode as 
separate processes, but still...




Re: faster splitter

2016-07-11 Thread Henrique bucher via Digitalmars-d

On Monday, 30 May 2016 at 18:20:39 UTC, Andrei Alexandrescu wrote:

On 05/30/2016 05:31 AM, qznc wrote:

On Sunday, 29 May 2016 at 21:07:21 UTC, qznc wrote:
worthwhile to use word loads [0] instead. Really fancy would 
be SSE.


I wrote a splitter in SSE4.2 some time ago as a contribution to a 
GitHub project. Perhaps it is related.


https://github.com/HFTrader/string-splitting/blob/master/splithb2.cpp

Cheers,


Re: D is crap

2016-07-11 Thread Guillaume Piolat via Digitalmars-d

On Monday, 11 July 2016 at 14:12:35 UTC, Chris wrote:


You focus on a small niche where people use all kinds of 
performance tricks even in C and C++. A lot of software doesn't 
care about GC overheads, however, and without GC a lot of 
people wouldn't even have considered it.




+1
A large majority of performance-heavy software can live with the 
GC.
GC is a blocker for people using micro-controllers with little 
memory, who usually don't get to choose a compiler.





Re: D is crap

2016-07-11 Thread Paulo Pinto via Digitalmars-d
On Monday, 11 July 2016 at 14:58:16 UTC, Ola Fosheim Grøstad 
wrote:

On Monday, 11 July 2016 at 14:45:56 UTC, Paulo Pinto wrote:
The biggest problem with D isn't the GC, it's the lack of focus to 
make it stand out versus .NET Native, Swift, Rust, Ada, SPARK, 
Java, C++17.


I knew you would chime in... Neither .NET, Swift nor Java should 
be considered system level tools. Ada/Spark has a very narrow 
use case. Rust is still in its infancy. C++17 is not yet 
finished. But yes, C++ currently owns system level programming, 
C is losing terrain and Rust has an uncertain future.


The biggest problem with D is not GC, because we now have @nogc. 
But D is still lacking in memory management.


Happy not to disappoint.  :)

OS vendors are the ones that eventually decided what is a systems 
programming language on their OSes.


And if they say so, like Apple is nowadays doing with Swift, 
developers will have no option other than to accept it or move to 
another platform, regardless of their opinion of what features a 
systems programming language should offer.


Just as C developers who used to bash C++ now have to accept 
that the two biggest C compilers are written in the language they 
love to hate.





Re: Go's march to low-latency GC

2016-07-11 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 11 July 2016 at 13:05:09 UTC, Russel Winder wrote:

Maybe because they are developing a language for the 1980s?

;-)


It is quite common for web services to run with less than 1GB. 
64bit would be very wasteful.




Re: D is crap

2016-07-11 Thread Paolo Invernizzi via Digitalmars-d

On Monday, 11 July 2016 at 14:45:56 UTC, Paulo Pinto wrote:


The biggest problem with D isn't the GC, it's the lack of focus to 
make it stand out versus .NET Native, Swift, Rust, Ada, SPARK, 
Java, C++17.


How true!
That's the only real problem with this beautiful language!

/P




Re: D is crap

2016-07-11 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 11 July 2016 at 14:12:35 UTC, Chris wrote:
Most certainly from a multi-purpose language. GC would have 
been demanded sooner or later. The mistake was not to make it 
optional from the beginning.


If D was designed as a high level language then it would be a 
mistake not to provide a GC in most scenarios. Yes.


care about GC overheads, however, and without GC a lot of 
people wouldn't even have considered it.


Lots of people were happy with Perl and Python before they 
added GC to catch cycles... Most applications don't leak a lot of 
memory to cyclic references, and they usually have to run for a 
while before it matters. (But constructing a worst case is easy, of course.)


(Btw, didn't mean to say that autorelease pools are the same as a 
region allocator, but they are similar in spirit.)



Go ahead, I'm sure it's fun. ;)


Oh, I didn't mean to say I have designed a language. I have many 
ideas and sketches, but far too many to implement and polish ;-).


I have started extending my knowledge of type systems, though; 
quite interesting. I think the change in computing power we now 
have is opening up many new interesting opportunities.





Re: D is crap

2016-07-11 Thread Chris via Digitalmars-d

On Monday, 11 July 2016 at 14:03:36 UTC, Infiltrator wrote:

On Monday, 11 July 2016 at 13:24:14 UTC, Chris wrote:

...
To have GC was definitely a good decision. What was not so
good was that it was not optional with a simple on/off switch.
...


I know that I'm missing something here, but what's wrong with 
the functions provided in core.memory?  Specifically, 
GC.disable()?


I was thinking of a compiler switch (as they did in Objective-C), 
and had D been designed with `-nogc` in mind from the start, 
Phobos would be GC-free too. Going without the GC is still a bit 
rough around the edges.


Re: D is crap

2016-07-11 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 11 July 2016 at 14:45:56 UTC, Paulo Pinto wrote:
The biggest problem with D isn't the GC, it's the lack of focus to 
make it stand out versus .NET Native, Swift, Rust, Ada, SPARK, 
Java, C++17.


I knew you would chime in... Neither .NET, Swift nor Java should 
be considered system level tools. Ada/Spark has a very narrow use 
case. Rust is still in its infancy. C++17 is not yet finished. 
But yes, C++ currently owns system level programming, C is 
losing terrain and Rust has an uncertain future.


The biggest problem with D is not GC, because we now have @nogc. 
But D is still lacking in memory management.




Re: Card on fire

2016-07-11 Thread Meta via Digitalmars-d
On Sunday, 10 July 2016 at 02:43:42 UTC, Andrei Alexandrescu 
wrote:
Got a text from Walter - his famous fanless graphics card 
caught fire along with the motherboard. He'll be outta 
commission for a few days. -- Andrei


It wouldn't happen to be an nVidia card, would it?


Re: D is crap

2016-07-11 Thread Paulo Pinto via Digitalmars-d
On Monday, 11 July 2016 at 14:02:09 UTC, Ola Fosheim Grøstad 
wrote:

On Monday, 11 July 2016 at 13:24:14 UTC, Chris wrote:
I bet you that if D hadn't had GC when it first came out, 
people would've mentioned manual memory management as a reason 
not to use GC. I never claimed that D was _propelled_ by GC, 
but that it was a feature that most users would expect. Not 
having it would probably have done more harm than having it.


Actually, I am certain that GC is a feature that _nobody_ would 
expect from a system level language, outside the Go-crowd.




I am no longer dabbling in D, but could not resist:

- UK Royal Navy with Algol 68 RS

- Xerox PARC with Mesa/Cedar

- DEC/Olivetti/Compaq with Modula-3

- ETHZ with Oberon, Oberon-2, Active Oberon, Component Pascal

- Microsoft with Spec#, System C# and the upcoming .NET Native C# 
7.0+ features
 (http://joeduffyblog.com/2015/12/19/safe-native-code/, 
https://www.infoq.com/news/2016/06/systems-programming-qcon)


- Astrobe with Oberon for micro-controllers (ARM Cortex-M4, Cortex-M3 and Xilinx FPGA Systems)

- PTC Perc Ultra with Java

- IS2T with their MicroEJ OS Java/C platform


The biggest problem with D isn't the GC, it's the lack of focus to make 
it stand out versus .NET Native, Swift, Rust, Ada, SPARK, Java, 
C++17.






Re: D is crap

2016-07-11 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 11 July 2016 at 14:19:07 UTC, ketmar wrote:
On Monday, 11 July 2016 at 14:02:09 UTC, Ola Fosheim Grøstad 
wrote:
Actually, I am certain that GC is a feature that _nobody_ 
would expect from a system level language, outside the 
Go-crowd.


hello. i am the man born to ruin your world.


Of course, you are the extra 1% that comes on top of the other 
100%.




Re: D is crap

2016-07-11 Thread ketmar via Digitalmars-d
On Monday, 11 July 2016 at 13:56:30 UTC, Ola Fosheim Grøstad 
wrote:

On Monday, 11 July 2016 at 12:18:26 UTC, ketmar wrote:
and most of those people never even started to use D. took a 
brief look, maybe wrote "helloworld", and that's all. it


Where do you get this from?


from reading this NG and other parts of teh internets.


Quite a few D programmers have gone to C++ and Rust.


quite a few people who tried D... and anyway, the reasons were 
more complex, and GC usually just a nice excuse.


 C is primarily used for portability/system support/interfacing 
or because you have an existing codebase.


this is mostly what "i can't stand GC" people want to do.

C is increasingly becoming a marginal language (narrow 
application area).


'cause manual memory management is PITA. not only due to this, of 
course, but this is still something.


Re: implicit conversions to/from shared

2016-07-11 Thread ag0aep6g via Digitalmars-d

On 07/11/2016 03:23 PM, ag0aep6g wrote:

I think I would prefer if the compiler would generate atomic operations,


Backpedaling on that one.

With automatic atomic loads and stores, one could accidentally write this:

shared int x;
x = x + 1; /* atomic load + atomic store != atomic increment */

Easy to miss the problem, because the code looks so innocent.

But when the atomic loads and stores must be spelled out it would look 
like in my original post:


shared int x;
atomicStore(x, atomicLoad(x) + 1);

Way more obvious that the code isn't actually thread-safe.

So now I'm leaning towards requiring the verbose version.
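
For completeness, the actually thread-safe increment would go through 
core.atomic's read-modify-write helper -- a minimal sketch, assuming the 
variable really is meant to be updated concurrently:

import core.atomic : atomicOp;

shared int x;

void bump()
{
    // a single atomic read-modify-write, unlike atomicStore(x, atomicLoad(x) + 1)
    atomicOp!"+="(x, 1);
}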


Re: D is crap

2016-07-11 Thread ketmar via Digitalmars-d
On Monday, 11 July 2016 at 14:02:09 UTC, Ola Fosheim Grøstad 
wrote:
Actually, I am certain that GC is a feature that _nobody_ would 
expect from a system level language, outside the Go-crowd.


hello. i am the man born to ruin your world.


Re: D is crap

2016-07-11 Thread Chris via Digitalmars-d
On Monday, 11 July 2016 at 14:02:09 UTC, Ola Fosheim Grøstad 
wrote:

On Monday, 11 July 2016 at 13:24:14 UTC, Chris wrote:
I bet you that if D hadn't had GC when it first came out, 
people would've mentioned manual memory management as a reason 
not to use GC. I never claimed that D was _propelled_ by GC, 
but that it was a feature that most users would expect. Not 
having it would probably have done more harm than having it.


Actually, I am certain that GC is a feature that _nobody_ would 
expect from a system level language, outside the Go-crowd.


Most certainly from a multi-purpose language. GC would have been 
demanded sooner or later. The mistake was not to make it optional 
from the beginning.


You focus on a small niche where people use all kinds of 
performance tricks even in C and C++. A lot of software doesn't 
care about GC overheads, however, and without GC a lot of people 
wouldn't even have considered it.


By the way, have you ever designed a language, I'd love to see 
how it would look like ;)


Most programmers have designed DSLs, so yes, obviously. If you 
are talking about a general purpose language then I wouldn't 
want to announce it until I was certain I got the basics right, 
like memory management.


Go ahead, I'm sure it's fun. ;)


Re: D is crap

2016-07-11 Thread Infiltrator via Digitalmars-d

On Monday, 11 July 2016 at 13:24:14 UTC, Chris wrote:

...
To have GC was definitely a good decision. What was not so
good was that it was not optional with a simple on/off switch.
...


I know that I'm missing something here, but what's wrong with the 
functions provided in core.memory?  Specifically, GC.disable()?
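
For reference, a minimal sketch of those knobs -- keeping in mind that 
GC.disable() only suspends automatic collections (the GC may still collect 
if it truly runs out of memory), and allocations still go through the GC heap:

import core.memory : GC;

void main()
{
    GC.disable();   // no automatic collections from here on
    // ... allocation-heavy section; memory still comes from the GC heap ...
    GC.enable();    // automatic collections allowed again
    GC.collect();   // optionally trigger one at a convenient point
}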


Re: D is crap

2016-07-11 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 11 July 2016 at 13:24:14 UTC, Chris wrote:
I bet you that if D hadn't had GC when it first came out, 
people would've mentioned manual memory management as a reason 
not to use GC. I never claimed that D was _propelled_ by GC, 
but that it was a feature that most users would expect. Not 
having it would probably have done more harm than having it.


Actually, I am certain that GC is a feature that _nobody_ would 
expect from a system level language, outside the Go-crowd.


By the way, have you ever designed a language, I'd love to see 
how it would look like ;)


Most programmers have designed DSLs, so yes, obviously. If you are 
talking about a general purpose language then I wouldn't want to 
announce it until I was certain I got the basics right, like 
memory management.




Re: D is crap

2016-07-11 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 11 July 2016 at 12:18:26 UTC, ketmar wrote:
and most of those people never even started to use D. took a 
brief look, maybe wrote "helloworld", and that's all. it


Where do you get this from? Quite a few D programmers have gone 
to C++ and Rust.


D *can* be used without GC. and it will still be "better C". it 
still will be less painful than C, but this is the price of 
doing "low-level things".


 C is primarily used for portability/system support/interfacing 
or because you have an existing codebase. Even Microsoft is now 
using higher level languages than C in parts of their system 
level code (the operating system).


Btw, C has changed quite a bit, it is at C11 now and even has 
"generics"... but I doubt many will use it. C is increasingly 
becoming a marginal language (narrow application area).


Re: implicit conversions to/from shared

2016-07-11 Thread ag0aep6g via Digitalmars-d

On 07/11/2016 03:31 PM, Kagamin wrote:

Atomic loads are only needed for volatile variables, not for all kinds
of shared data.


Volatile just means that another thread can mess with the data, right? 
So shared data that's not being written to from elsewhere isn't 
volatile, and one doesn't need an atomic load to read it.


If I got that right: Sure. But the compiler can't know if a shared 
variable is volatile or not, so it has to assume that it is. If the 
programmer knows that it's not volatile, they can cast shared away and 
use a normal load.
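
Roughly like this, as a sketch -- only valid under the assumption that no 
other thread touches the variable at that moment:

shared int x;

void readQuiescent()
{
    // strip shared with a cast and do an ordinary, non-atomic load
    int local = *cast(int*) &x;
    // ... use local ...
}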



Also currently atomicLoad doesn't provide functionality
equivalent to raw load.


Is a "raw load" just a non-atomic load, or is it something special?
What's the relevance of atomicLoad's capabilities?


Generating atomic operations would break less code, and feels like the
obvious thing to me.


Multithreaded code can't be generated.


For primitive types, atomic loads and stores can be generated, no? It's 
clear that this doesn't make the code automatically thread-safe. It just 
guards against an easily made mistake. Like shared is supposed to do, as 
far as I understand.


Re: exceptions vs error codes

2016-07-11 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 11 July 2016 at 13:20:50 UTC, Walter Bright wrote:

On 7/10/2016 10:21 PM, Ola Fosheim Grøstad wrote:

On Monday, 11 July 2016 at 04:27:35 UTC, Walter Bright wrote:
When has D ever done anything right in your eyes? Just 
curious!

[ ... more complaints about D ... ]


So what's stopping you from using Scheme?


Nothing is stopping me from using Lisp where appropriate. Why are 
you asking?


Re: implicit conversions to/from shared

2016-07-11 Thread Kagamin via Digitalmars-d

On Monday, 11 July 2016 at 05:26:42 UTC, ag0aep6g wrote:
Simply disallow reading and writing shared then, forcing 
something more explicit like atomicLoad/atomicStore?


That would be better than the current state, but it would make 
shared even more unwieldy.


Atomic loads are only needed for volatile variables, not for all 
kinds of shared data. Also currently atomicLoad doesn't provide 
functionality equivalent to raw load.


Generating atomic operations would break less code, and feels 
like the obvious thing to me.


Multithreaded code can't be generated.


Re: D is crap

2016-07-11 Thread Chris via Digitalmars-d
I bet you that if D hadn't had GC when it first came out, 
people would've mentioned manual memory management as a reason 
not to use GC. I never claimed that D was _propelled_ by GC, 
but that it was a feature that most users would expect. Not 
having it would probably have done more harm than having it.


By the way, have you ever designed a language, I'd love to see 
how it would look like ;)


[snip]


s/not to use GC/not to use D


Re: D is crap

2016-07-11 Thread Chris via Digitalmars-d
On Monday, 11 July 2016 at 11:59:51 UTC, Ola Fosheim Grøstad 
wrote:

On Monday, 11 July 2016 at 09:30:37 UTC, Chris wrote:
Lisp or SmallTalk)[1]. D couldn't have afforded not to have GC 
when it first came out. It was expected of a (new) language to 
provide GC by then - and GC had become a selling point for new 
languages.


This is not true, it is just wishful thinking. D was harmed by 
the GC, not propelled by it. I am not missing any point, sorry. 
Just go look at what people who gave up on D claim to be a 
major reason, the GC scores high...



No. Having GC attracts more users, because they either explicitly 
want it or they don't care about the overhead. To have GC was 
definitely a good decision. What was not so good was that it was 
not optional with a simple on/off switch. Neither was it a good 
idea not to spend more time on ways to optimize the GC, so it was 
comparatively slow.


Keep in mind that the no GC crowd has very specialized needs 
(games, real time systems). Then again, to win this crowd over 
from C/C++ is not easy, regardless. And ... let's not forget that 
GC is often used as a handy excuse not to use D. "You don't use D 
because of a, b, c or because of GC?" - "Yeah, that one."


I bet you that if D hadn't had GC when it first came out, people 
would've mentioned manual memory management as a reason not to 
use GC. I never claimed that D was _propelled_ by GC, but that it 
was a feature that most users would expect. Not having it would 
probably have done more harm than having it.


By the way, have you ever designed a language, I'd love to see 
how it would look like ;)


[snip]


Re: implicit conversions to/from shared

2016-07-11 Thread ag0aep6g via Digitalmars-d

On 07/11/2016 02:54 PM, Steven Schveighoffer wrote:

I think you misunderstand the problem here.


Yes.


Conversion means changing
the type.

Once you have loaded the shared data into a register, or whatever, it's
no longer shared, it's local. Writing it out to another place doesn't
change anything. It's once you add references into the mix where you may
have a problem.


Right.


What I think you mean (and I think you realize this now), is that the
actual copying of the data should not be implicitly allowed. The type
change is fine, it's the physical reading or writing of shared data that
can cause issues. I agree we should extend the rules to prevent this.


Exactly.


In other words:

shared int x;

void main()
{
// ++x; // not allowed
int x2 = x + 1; // but this is
x = x2; // and it shouldn't be
}


I think I would prefer if the compiler would generate atomic operations, 
but I'm clearly far from being an expert on any of this. Simply 
rejecting the code would be fine with me, too.


Also:

shared int x;
shared int y;
x = y; /* should be rejected too (or be atomic, if that's even possible) */


Re: exceptions vs error codes

2016-07-11 Thread Walter Bright via Digitalmars-d

On 7/10/2016 10:21 PM, Ola Fosheim Grøstad wrote:

On Monday, 11 July 2016 at 04:27:35 UTC, Walter Bright wrote:

When has D ever done anything right in your eyes? Just curious!

[ ... more complaints about D ... ]


So what's stopping you from using Scheme?



Re: Go's march to low-latency GC

2016-07-11 Thread Patrick Schluter via Digitalmars-d

On Monday, 11 July 2016 at 12:21:04 UTC, Sergey Podobry wrote:

On Monday, 11 July 2016 at 11:23:26 UTC, Dicebot wrote:

On Sunday, 10 July 2016 at 19:49:11 UTC, Sergey Podobry wrote:


Remember that virtual address space is limited on 32-bit 
platforms. Thus spawning 2000 threads 1 MB stack each will 
occupy all available VA space and you'll get an allocation 
failure (even if the real memory usage is low).


Sorry, but someone who tries to run highly concurrent server 
software with thousands of fibers on 32-bit platform is quite 
unwise and there is no point in taking such use case into 
account. 32-bit has its own niche with different kinds of 
concerns.


Agreed. I don't know why golang guys bother about it.


Because of attitudes like the ones shown in that thread
https://forum.dlang.org/post/ilbmfvywzktilhskp...@forum.dlang.org
from people who do not really understand why 32-bit systems are 
really problematic even if the apps don't use more than 2 GiB of 
memory.


Here's Linus Torvalds' classic rant about 64 bit:
https://cl4ssic4l.wordpress.com/2011/05/24/linus-torvalds-about-pae/  (it's 
more about PAE, but the reasons why 64 bits is a good thing in general are the 
same: address space!)


Re: exceptions vs error codes

2016-07-11 Thread Walter Bright via Digitalmars-d

On 7/9/2016 8:02 PM, Superstar64 wrote:

Would it be possible and a good idea to have a language feature that
allows some exceptions to use error code code generation.


If you want to return an error code, return an error code. No language 
feature is required.


Re: Card on fire

2016-07-11 Thread Steven Schveighoffer via Digitalmars-d

On 7/9/16 10:43 PM, Andrei Alexandrescu wrote:

Got a text from Walter - his famous fanless graphics card caught fire
along with the motherboard. He'll be outta commission for a few days. --


That's kind of scary. It's one of those things you don't think about 
happening -- like what if you weren't home when this happened.


Could have been a lot worse.

-Steve


Re: Go's march to low-latency GC

2016-07-11 Thread Russel Winder via Digitalmars-d
On Mon, 2016-07-11 at 12:21 +, Sergey Podobry via Digitalmars-d
wrote:
> On Monday, 11 July 2016 at 11:23:26 UTC, Dicebot wrote:
> > On Sunday, 10 July 2016 at 19:49:11 UTC, Sergey Podobry wrote:
> > > 
> > > Remember that virtual address space is limited on 32-bit 
> > > platforms. Thus spawning 2000 threads 1 MB stack each will 
> > > occupy all available VA space and you'll get an allocation 
> > > failure (even if the real memory usage is low).
> > 
> > Sorry, but someone who tries to run highly concurrent server 
> > software with thousands of fibers on 32-bit platform is quite 
> > unwise and there is no point in taking such use case into 
> > account. 32-bit has its own niche with different kinds of 
> > concerns.
> 
> Agreed. I don't know why golang guys bother about it.

Maybe because they are developing a language for the 1980s?

;-)

-- 

Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder



Re: exceptions vs error codes

2016-07-11 Thread Steven Schveighoffer via Digitalmars-d

On 7/9/16 11:02 PM, Superstar64 wrote:

In terms of performance and code generation exceptions are faster in the
regular path while error codes are faster in the error path.

Would it be possible and a good idea to have a language feature that
allows some exceptions to use error code code generation.


Swift does exactly this. It doesn't have exceptions, but mimics them by 
passing a hidden "error" parameter that is filled in if an error occurs. 
No stack unwinding occurs, just normal returns.


This allows them to get around stack unwinding issues with ARC.
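
Sketching the shape of that calling convention in D terms -- purely an 
illustration of the idea, not Swift's actual ABI: the callee fills in an 
extra error slot and returns normally, and the caller checks it.

struct CallError { string msg; }   // hypothetical error type for the sketch

int parseDigit(char c, ref CallError err)
{
    if (c < '0' || c > '9')
    {
        err = CallError("not a digit");
        return 0;               // dummy value; the caller checks err
    }
    return c - '0';
}

void demo()
{
    CallError err;
    int d = parseDigit('x', err);
    if (err.msg !is null) { /* handle the failure, no unwinding happened */ }
}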

I don't think this is possible for D to switch to. It would break too 
much code. You have to instrument your code properly to get this 
behavior, and it's actually quite unpleasant IMO.


-Steve


Re: implicit conversions to/from shared

2016-07-11 Thread Steven Schveighoffer via Digitalmars-d

On 7/10/16 9:02 AM, ag0aep6g wrote:

While messing with atomicLoad [1], I noticed that dmd lets me implicitly
convert values to/from shared without restrictions. It's in the spec
[2]. This seems bad to me.


I think you misunderstand the problem here. Conversion means changing 
the type.


Once you have loaded the shared data into a register, or whatever, it's 
no longer shared, it's local. Writing it out to another place doesn't 
change anything. It's once you add references into the mix where you may 
have a problem.


What I think you mean (and I think you realize this now), is that the 
actual copying of the data should not be implicitly allowed. The type 
change is fine, it's the physical reading or writing of shared data that 
can cause issues. I agree we should extend the rules to prevent this.


In other words:

shared int x;

void main()
{
   // ++x; // not allowed
   int x2 = x + 1; // but this is
   x = x2; // and it shouldn't be
}

-Steve


Re: D is crap

2016-07-11 Thread Luís Marques via Digitalmars-d

On Sunday, 10 July 2016 at 18:53:52 UTC, Jacob Carlborg wrote:
On OS X when an application segfaults a crash report will be 
generated. It's available in the Console application.


Doesn't seem to work for me on 10.11.5. Maybe you need to enable 
that on the latest OSes? In any case, that will probably get you 
a mangled stack trace, right? It would still be useful 
(especially if the stack trace is correct; in LLDB I get some 
crappy ones sometimes) but it would not be as convenient as the 
stack trace on Windows generated by the druntime.


Re: Go's march to low-latency GC

2016-07-11 Thread Sergey Podobry via Digitalmars-d

On Monday, 11 July 2016 at 11:23:26 UTC, Dicebot wrote:

On Sunday, 10 July 2016 at 19:49:11 UTC, Sergey Podobry wrote:


Remember that virtual address space is limited on 32-bit 
platforms. Thus spawning 2000 threads 1 MB stack each will 
occupy all available VA space and you'll get an allocation 
failure (even if the real memory usage is low).


Sorry, but someone who tries to run highly concurrent server 
software with thousands of fibers on 32-bit platform is quite 
unwise and there is no point in taking such use case into 
account. 32-bit has its own niche with different kinds of 
concerns.


Agreed. I don't know why golang guys bother about it.


Re: D is crap

2016-07-11 Thread ketmar via Digitalmars-d
On Monday, 11 July 2016 at 11:59:51 UTC, Ola Fosheim Grøstad 
wrote:
Just go look at what people who gave up on D claim to be a 
major reason, the GC scores high...


and most of those people never even started to use D. took a 
brief look, maybe wrote "helloworld", and that's all. it doesn't 
matter in this case which reason made 'em "turn away". if not GC, 
it would be something else: they just wanted their Ideal 
Language, and found that D is not it. those people just can't be 
satisfied, 'cause they are looking for something D isn't at all.


D *can* be used without GC. and it will still be "better C". it 
still will be less painful than C, but this is the price of doing 
"low-level things". or it can be used on a much higher level, 
where GC doesn't really matter anymore (and actually desirable).


Re: D is crap

2016-07-11 Thread Luís Marques via Digitalmars-d

On Saturday, 9 July 2016 at 08:40:00 UTC, Walter Bright wrote:

On 7/8/2016 2:36 PM, Luís Marques wrote:

On Friday, 8 July 2016 at 21:26:19 UTC, Walter Bright wrote:
Only on Windows, and that's a common source of frustration 
for me :(


Linux too.


Not by default, right?


-g


Well, it doesn't work for me on Linux with the latest DMD, even 
with -g.
To be clear, the whole context was "Not by default, right? Only 
with the magic import and call."


Re: D is crap

2016-07-11 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 11 July 2016 at 08:55:06 UTC, ketmar wrote:
On Monday, 11 July 2016 at 08:45:21 UTC, Ola Fosheim Grøstad 
wrote:

On Monday, 11 July 2016 at 08:43:03 UTC, ketmar wrote:
On Monday, 11 July 2016 at 07:16:57 UTC, Ola Fosheim Grøstad 
wrote:

There aren't many people you trust then...

exactly. 99% of people are idiots.


100%


it depends of rounding mode.


101%


Re: D is crap

2016-07-11 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 11 July 2016 at 09:30:37 UTC, Chris wrote:
Lisp or SmallTalk)[1]. D couldn't have afforded not to have GC 
when it first came out. It was expected of a (new) language to 
provide GC by then - and GC had become a selling point for new 
languages.


This is not true, it is just wishful thinking. D was harmed by 
the GC, not propelled by it. I am not missing any point, sorry. 
Just go look at what people who gave up on D claim to be a major 
reason, the GC scores high...



It wasn't demanding. I wrote a lot of code in Objective-C and 
it was perfectly doable.


Of course it was doable, but developers had trouble getting it 
right. In Objective-C Foundation you have to memorize what kind 
of ownership each function's return value comes with, a 
responsibility which ARC relieves the developer from. Autorelease 
pools do not change that, and you have to take special measures 
to avoid running out of memory with autorelease pools, as an 
autorelease pool is a very simple region allocator (what Walter 
calls a bump allocator), so autorelease pools are not a generic solution.


Objective-C had a very primitive manual RC solution that relied 
on conventions. They added a GC and ARC and only kept ARC. As 
simple as that.


C++ actually has much more robust memory management than what 
Objective-C had.




Re: Card on fire

2016-07-11 Thread Mike James via Digitalmars-d
On Sunday, 10 July 2016 at 02:43:42 UTC, Andrei Alexandrescu 
wrote:
Got a text from Walter - his famous fanless graphics card 
caught fire along with the motherboard. He'll be outta 
commission for a few days. -- Andrei


I hope he wasn't using it on the back of a camel in the desert...

http://forum.dlang.org/post/nl5hqu$1atc$1...@digitalmars.com



Re: Go's march to low-latency GC

2016-07-11 Thread Dicebot via Digitalmars-d

On Sunday, 10 July 2016 at 19:49:11 UTC, Sergey Podobry wrote:

On Saturday, 9 July 2016 at 13:48:41 UTC, Dicebot wrote:
Nope, this is exactly the point. You can demand crazy 10 MB of 
stack for each fiber and only the actually used part will be 
allocated by kernel.


Remember that virtual address space is limited on 32-bit 
platforms. Thus spawning 2000 threads 1 MB stack each will 
occupy all available VA space and you'll get an allocation 
failure (even if the real memory usage is low).


Sorry, but someone who tries to run highly concurrent server 
software with thousands of fibers on 32-bit platform is quite 
unwise and there is no point in taking such use case into 
account. 32-bit has its own niche with different kinds of 
concerns.


Re: D is crap

2016-07-11 Thread Chris via Digitalmars-d
On Sunday, 10 July 2016 at 03:25:16 UTC, Ola Fosheim Grøstad 
wrote:


Just like there is no C++ book that does not rant about how 
great RAII is... What do you expect from a language evangelic? 
The first Java implementation Hotspot inherited its technology 
from StrongTalk, a Smalltalk successor. It was not a Java 
phenomenon, and FWIW both Lisp, Simula and Algol68 were garbage 
collected.


Please stop intentionally missing the point. I don't care if 
Leonardo Da Vinci had already invented GC - which wouldn't 
surprise me - but this is not the point. My point is that GC 
became a big thing in the late '90s and early 2000s, which is in part 
owed to Java having become the religion of the day (not Lisp or 
SmallTalk)[1]. D couldn't have afforded not to have GC when it 
first came out. It was expected of a (new) language to provide GC 
by then - and GC had become a selling point for new languages.


[1] And of course computers had become more powerful and could 
handle the overhead of GC better than in the '80s.


What was "new" with Java was compile-once-run-everywhere. 
Although, that wasn't new either, but it was at least 
marketable as new.


Java was the main catalyst for GC - or at least for people 
demanding it. Practically everybody who had gone through IT 
courses, college etc. with Java (and there were loads) wanted 
GC. It was a given for many people.


Well, yes, of course Java being used in universities created a 
demand for Java and similar languages. But GC languages were 
extensively used in universities before Java.


Yes, it didn't last long. But the fact that they bothered to 
introduce it shows you how big GC was/is.


No, it shows how demanding manual reference counting in 
Objective-C was on regular programmers. GC is the first go-to 
solution for easy memory management, and has been so since the 
'60s. Most high level languages use garbage collection.


It wasn't demanding. I wrote a lot of code in Objective-C and it 
was perfectly doable. You even have features like `autorelease` 
for return values. The thing is that Apple had become an 
increasingly popular platform and more and more programmers were 
writing code for OS X. So they thought they'd make it easier and 
reduce potential memory leaks (introduced by not-so-experienced 
Objective-C coders) by adding GC, especially because a lot of 
programmers expected GC "in this day and age".


Re: D is crap

2016-07-11 Thread ketmar via Digitalmars-d
On Monday, 11 July 2016 at 08:45:21 UTC, Ola Fosheim Grøstad 
wrote:

On Monday, 11 July 2016 at 08:43:03 UTC, ketmar wrote:
On Monday, 11 July 2016 at 07:16:57 UTC, Ola Fosheim Grøstad 
wrote:

There aren't many people you trust then...

exactly. 99% of people are idiots.


100%


it depends of rounding mode.


Re: exceptions vs error codes

2016-07-11 Thread ketmar via Digitalmars-d

On Sunday, 10 July 2016 at 21:31:57 UTC, Chris Wright wrote:

On Sun, 10 Jul 2016 16:57:49 +, ketmar wrote:


On Sunday, 10 July 2016 at 16:47:31 UTC, Chris Wright wrote:
You do need a try/catch in every annotated function to catch 
runtime exceptions like OutOfMemoryError.


as a side note: it is not even guaranteed that one *can* catch 
Error. and it is plainly wrong to try to continue execution 
after that, as the program is in an undefined state.


Array bounds errors, then.


any Error. the spec clearly says that throwing Error doesn't 
guarantee proper unwinding (and by that, it may skip as many 
catch blocks as it wants), and the program is in an undefined state after 
catching Error.


in DMD, it just happens to be implemented in a way that makes it 
possible to catch many errors, get unwinding, and such. but it 
isn't a requirement. and, by the way, by writing a different 
spec-compliant implementation, one can break a lot of unittests, 
as many unittests catch AssertError.


and no, i don't know how to write a reliable and spec-compliant 
unittest in D with `assert`s.
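
the pattern in question looks roughly like this -- and it only works 
because DMD happens to unwind and let you catch AssertError, which the 
spec doesn't promise:

unittest
{
    import core.exception : AssertError;

    bool caught;
    try
    {
        assert(1 == 2, "must throw");
    }
    catch (AssertError)   // relies on an implementation detail of DMD
    {
        caught = true;
    }
    assert(caught);
}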


Re: D is crap

2016-07-11 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 11 July 2016 at 08:43:03 UTC, ketmar wrote:
On Monday, 11 July 2016 at 07:16:57 UTC, Ola Fosheim Grøstad 
wrote:

There aren't many people you trust then...

exactly. 99% of people are idiots.


100%



Re: D is crap

2016-07-11 Thread ketmar via Digitalmars-d
On Monday, 11 July 2016 at 07:16:57 UTC, Ola Fosheim Grøstad 
wrote:

There aren't many people you trust then...

exactly. 99% of people are idiots.




Re: D is crap

2016-07-11 Thread Ola Fosheim Grøstad via Digitalmars-d

On Sunday, 10 July 2016 at 19:12:46 UTC, ketmar wrote:

then i won't trust a word they said.


There aren't many people you trust then... Seriously, in academic 
contexts a statement like «X is a garbage collected language» 
always means tracing. It would be very odd to assume that X used 
reference counting.