Re: If you needed any more evidence that memory safety is the future...

2017-02-24 Thread Moritz Maxeiner via Digitalmars-d

On Friday, 24 February 2017 at 06:59:16 UTC, Jack Stouffer wrote:

https://bugs.chromium.org/p/project-zero/issues/detail?id=1139

[...]


This isn't evidence that memory safety is "the future", though.
This is evidence that people do not follow basic engineering 
practices (for whatever seemingly valid reasons - such as a 
project deadline - at the time).
Writing a program (with manual memory management) that does not 
have dangerous memory issues is not an intrinsically hard task. 
It does, however, require you to *design* your program, not 
*grow* it (which, btw, is what a software *engineer* should do 
anyway).
Systems such as memory ownership+borrowing, garbage collection, 
(automatic) reference counting can mitigate the symptoms (and I 
happily use any or all of them when they are the best tool for 
the task at hand), but none of them will solve the real issue: 
The person in front of the screen (which includes you and me).


Re: code.dlang.org major outage

2017-02-23 Thread Moritz Maxeiner via Digitalmars-d

On Thursday, 23 February 2017 at 22:55:05 UTC, Sönke Ludwig wrote:
The virtual server that is running code.dlang.org has frozen 
about an hour ago and fails to boot further than to the 
bootloader. Initial attempts to recover from within Grub have 
failed and it's unclear what the root cause is. I will instead 
set up a replacement server with the latest backup from 20 
hours ago as soon as it becomes available.


It's really about time to install a permanent backup server to 
keep the registry available in such cases. I will try my best 
to allocate time to make this happen, but I'm severely 
constrained currently and any helping hand would make a big 
difference. What needs to be implemented is a mode for the DUB 
registry [1] that works without the userman database and 
instead just polls code.dlang.org for changes (may also require 
some new REST API end points to be able to acquire all needed 
data).


[1]: https://github.com/dlang/dub-registry/


Thank you for the information.
Not sure if this helps, but if you ever want to fail over 
code.dlang.org onto multiple servers (so that if this happens on 
one of them, the others can take over transparently), I can offer 
an IPv6 LXD instance.


Re: Threads not garbage collected ?

2017-02-22 Thread Moritz Maxeiner via Digitalmars-d

On Wednesday, 22 February 2017 at 05:28:17 UTC, Alex wrote:

[...]

In both gdc and dmd I need to manually delete this object 
or the program is blocked after main. Is this by design?


Yes, it's documented here[1] (others have already replied on the 
GC subject, so I won't go into that). If you want a thread to be 
forcibly terminated at program exit instead of it preventing 
program termination, you need to set `thread.isDaemon = true`.


It seems undesirable, as the thread can be many layers of 
encapsulation down, and they all need manual deletes.


It's highly desirable to not have the program terminate when 
there's still work to be done on some critical non-main thread, 
so the druntime split into daemon vs non-daemon threads makes 
sense.
Whether new threads should be daemonized by default (as opposed 
to preventing program termination by default) is an arbitrary 
decision. In your case, just set `isDaemon = true` for your 
threads and you should be good.
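
For reference, a minimal (untested) sketch of daemonizing a thread 
you create yourself - the names in it are made up:

---
import core.thread : Thread;

void main()
{
    auto worker = new Thread({
        while (true) {} // never finishes on its own
    });
    worker.isDaemon = true; // don't let this thread keep the program alive
    worker.start();
} // main returns here; the daemonized worker is terminated with the program
---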


[1] https://dlang.org/phobos/core_thread.html#.Thread.isDaemon


Re: Force inline

2017-02-20 Thread Moritz Maxeiner via Digitalmars-d-learn

On Monday, 20 February 2017 at 12:47:43 UTC, berni wrote:

pragma(inline, true) doesn't work out well:


int bar;

void main(string[] args)
{
   if (foo()) {}
}

bool foo()
{
   pragma(inline, true)

   if (bar==1) return false;
   if (bar==2) return false;

   return true;
}


with


dmd -inline test.d


I get


test.d(8): Error: function test.foo cannot inline function


Because dmd's semantic analysis determined that it doesn't know 
how to inline the function, and since you insisted (via 
`pragma(inline, true)`) that it must be inlined, you get an error. 
This is a dmd limitation; ldc2 happily inlines your function:


---
$ ldc2 --version
LDC - the LLVM D compiler (1.1.0):
  based on DMD v2.071.2 and LLVM 3.9.1
  built with DMD64 D Compiler v2.072.2
  Default target: x86_64-pc-linux-gnu
$ ldc2 -c test.d
$ objdump -dr test.o
test.o:     file format elf64-x86-64


Disassembly of section .text._Dmain:

0000000000000000 <_Dmain>:
   0:   53                      push   %rbx
   1:   48 83 ec 20             sub    $0x20,%rsp
   5:   48 89 7c 24 10          mov    %rdi,0x10(%rsp)
   a:   48 89 74 24 18          mov    %rsi,0x18(%rsp)
   f:   66 48 8d 3d 00 00 00    data16 lea 0x0(%rip),%rdi   # 17 <_Dmain+0x17>
  16:   00
                        13: R_X86_64_TLSGD      _D4test3bari-0x4
  17:   66 66 48 e8 00 00 00    data16 data16 callq 1f <_Dmain+0x1f>
  1e:   00
                        1b: R_X86_64_PLT32      __tls_get_addr-0x4
  1f:   8b 18                   mov    (%rax),%ebx
  21:   83 fb 01                cmp    $0x1,%ebx
  24:   75 0a                   jne    30 <_Dmain+0x30>
  26:   31 c0                   xor    %eax,%eax
  28:   88 c1                   mov    %al,%cl
  2a:   88 4c 24 0f             mov    %cl,0xf(%rsp)
  2e:   eb 29                   jmp    59 <_Dmain+0x59>
  30:   66 48 8d 3d 00 00 00    data16 lea 0x0(%rip),%rdi   # 38 <_Dmain+0x38>
  37:   00
                        34: R_X86_64_TLSGD      _D4test3bari-0x4
  38:   66 66 48 e8 00 00 00    data16 data16 callq 40 <_Dmain+0x40>
  3f:   00
                        3c: R_X86_64_PLT32      __tls_get_addr-0x4
  40:   8b 18                   mov    (%rax),%ebx
  42:   83 fb 02                cmp    $0x2,%ebx
  45:   75 0a                   jne    51 <_Dmain+0x51>
  47:   31 c0                   xor    %eax,%eax
  49:   88 c1                   mov    %al,%cl
  4b:   88 4c 24 0f             mov    %cl,0xf(%rsp)
  4f:   eb 08                   jmp    59 <_Dmain+0x59>
  51:   b0 01                   mov    $0x1,%al
  53:   88 44 24 0f             mov    %al,0xf(%rsp)
  57:   eb 00                   jmp    59 <_Dmain+0x59>
  59:   8a 44 24 0f             mov    0xf(%rsp),%al
  5d:   a8 01                   test   $0x1,%al
  5f:   75 02                   jne    63 <_Dmain+0x63>
  61:   eb 02                   jmp    65 <_Dmain+0x65>
  63:   eb 00                   jmp    65 <_Dmain+0x65>
  65:   31 c0                   xor    %eax,%eax
  67:   48 83 c4 20             add    $0x20,%rsp
  6b:   5b                      pop    %rbx
  6c:   c3                      retq
---



When I remove -inline, it compiles, but seems not to inline. I 
cannot tell from this small example, but with the large 
program, there is no speed gain.


I'd suggest inspecting the generated assembly in order to 
determine whether your function was inlined or not (see above 
using objdump for Linux).




It also compiles with -inline when I remove the "if 
(bar==2)...". I guess, it's now really inlining, but the 
function is ridiculously short...


I don't know, but I'd guess that the length of a function is not 
as important for the consideration of being inlined as its 
semantics.


Re: CTFE Status 2

2017-02-18 Thread Moritz Maxeiner via Digitalmars-d

On Sunday, 19 February 2017 at 01:52:07 UTC, Stefan Koch wrote:
On Saturday, 18 February 2017 at 22:40:44 UTC, Moritz Maxeiner 
wrote:
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch 
wrote:
Hi Guys, due to the old CTFE status thread getting to page 
30, I am now starting a new one.


[...]


Thank you for your continued work on this. I heavily rely on 
D's CTFE functionality and I try to read all your updates on 
it.


Hello Moritz,
D's ctfe functionality is almost complete.
This thread is not about ctfe as it is currently implemented.
I am working on/writing about a faster re-implementation of 
ctfe.


When my work is finished nothing will change functionality 
wise, it will just be much more efficient.


Hello Stefan,
my apologies if I wasn't clear: I'm aware that this isn't about 
adding anything new in terms of functionality. What I intended to 
imply with "heavily rely on" was that I use enough of CTFE for it 
to be noticeable in compile time / memory consumption, which is 
why I'm very happy about a potential speedup and/or reduction of 
memory consumption in CTFE. One public example of that was 
llvm-d <2.0.


Re: CTFE Status 2

2017-02-18 Thread Moritz Maxeiner via Digitalmars-d

On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote:
Hi Guys, due to the old CTFE status thread getting to page 30, 
I am now starting a new one.


[...]


Thank you for your continued work on this. I heavily rely on D's 
CTFE functionality and I try to read all your updates on it.


Re: scope with if

2017-02-18 Thread Moritz Maxeiner via Digitalmars-d-learn

On Friday, 17 February 2017 at 20:06:19 UTC, berni wrote:

I wonder if it's possible to do something like this:


import std.stdio;

void main(string[] args)
{
   if (args[1]=="a")
   {
   write("A");
   scope (exit) write("B");
   }

   write("C");
}


I expected the output to be ACB not ABC.


Scope guards are documented here[1][2] and that example behaves 
according to the spec. You can achieve what I understood to be your 
objective by implementing the desired functionality on top of a 
scope guard, though:


---
import std.stdio;

void main(string[] args)
{
    void delegate()[] finalizers;
    scope (exit) foreach (onExit; finalizers) onExit();

    if (args.length >= 2 && args[1] == "a")
    {
        writeln("A");
        finalizers ~= { writeln("B"); };
    }

    writeln("C");
}
---

Keep the following in mind, though[2]:
A scope(exit) or scope(success) statement may not exit with a 
throw, goto, break, continue, or return; nor may it be entered 
with a goto.

i.e. those finalizers must not throw.

On Friday, 17 February 2017 at 20:06:19 UTC, berni wrote:
I understand, that the scope ends at the end of the if, but I 
wonder, if it's possible to have a "conditional scope" or 
something like this.


As shown in the example above, if you want functionality that is 
not provided by the default scope guard behaviour, you'll have to 
implement it yourself on top of scope guards.


[1] https://tour.dlang.org/tour/en/gems/scope-guards
[2] https://dlang.org/spec/statement.html#ScopeGuardStatement



Re: Hello, folks! Newbie to D, have some questions!

2017-02-18 Thread Moritz Maxeiner via Digitalmars-d-learn

On Saturday, 18 February 2017 at 20:15:55 UTC, timmyjose wrote:
My rudimentary knowledge of the D ecosystem tells me that there 
is a GC in D, but that can be turned off. Is this correct?


Technically yes; you will lose core functionality, though, if you 
do.
I don't have the complete list at hand, but e.g. dynamic and 
associative arrays are among the things you won't be able to use 
without the GC IIRC. If you use the reference compiler (dmd), you 
can use the flag `-vgc` to be shown all the GC allocations in a D 
program.
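
As a rough illustration (a minimal sketch, not an exhaustive list 
of what depends on the GC; the file and function names are made up):

---
// Compile with `dmd -vgc example.d` to have the GC allocations reported.
void withGC()
{
    int[] a = [1, 2, 3]; // the array literal allocates on the GC heap
    a ~= 4;              // appending may (re)allocate on the GC heap
}

@nogc void withoutGC()
{
    // The two lines above would be compile errors in here, e.g.:
    // int[] a = [1, 2, 3]; // Error: array literal may cause a GC allocation
}
---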


On Saturday, 18 February 2017 at 20:15:55 UTC, timmyjose wrote:
Also, some threads online mention that if we do turn off GC, 
some of the core std libraries may not fully work. Is this 
presumption also correct?


Yes. Everything in Phobos that uses features depending on the GC 
won't work anymore.


On Saturday, 18 February 2017 at 20:15:55 UTC, timmyjose wrote:
In this regard, I am curious to know if I would face any issues 
(with my intent in mind), or will I do just fine?


If you don't turn the GC off you should be fine. The GC will - 
AFAIK - only perform a collection cycle as a result of an 
allocation call to it, so you can avoid slow collection cycles 
without turning it off.
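
If pauses at specific points ever do become a problem, here is a 
minimal sketch of postponing collections (assuming core.memory's 
GC controls; note that the runtime may still collect when it runs 
out of memory):

---
import core.memory : GC;

void latencySensitiveSection()
{
    GC.disable();             // postpone collection cycles...
    scope (exit) GC.enable(); // ...until this scope is left
    // Allocations still work here, they just won't trigger a collection
    // (barring an out-of-memory situation).
}
---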


On Saturday, 18 February 2017 at 20:15:55 UTC, timmyjose wrote:
If you could share your experiences and domains of use, that 
would also be very helpful for me.


I mostly use D for writing tools for my own use that have to 
interact with C APIs.


On Saturday, 18 February 2017 at 20:15:55 UTC, timmyjose wrote:
Secondly, how stable is the language and how fast is the pace 
of development on D?


The parts of the language I need are pretty stable, but I don't 
think I use even half of what the language offers (D is very 
complex).
Regarding speed, you can see the numbers (git tags) for yourself 
here[0].


On Saturday, 18 February 2017 at 20:15:55 UTC, timmyjose wrote:
2. I am also curious as to what would be the best path for a 
complete beginner to D to learn it effectively?


That's usually not something someone can tell you, since every 
person learns differently.
Personally, when I started with D (back in D1 days) I read the 
articles about it and then just tried writing tools in it, so I 
suggest reading these[1]


On Saturday, 18 February 2017 at 20:15:55 UTC, timmyjose wrote:
I am a relatively fast learner (and I learn better by context, 
as in, some core unifying idea described and then elucidated 
through big examples instead of learning in bits and pieces).


I'd describe D's unifying idea as "allow people to write complex, 
native software without all the C/C++ insanity". Though D comes 
with its own share of insanity, of course.


On Saturday, 18 February 2017 at 20:15:55 UTC, timmyjose wrote:
Are there any books/video tutorials that you would recommend 
(aside from this site itself).


I personally would not recommend books at the very start of 
learning a language (if one is already proficient with native 
programming in general), but only after one has already gotten 
comfortable with it and is looking for a comprehensive overview.

Regardless, I've heard good things about two books[2][3].
Since I loathe video tutorials I can't add anything on that point.

On Saturday, 18 February 2017 at 20:15:55 UTC, timmyjose wrote:
3. Are there some small-scale Open Source projects that you 
would recommend to peruse to get a feel for and learn idiomatic 
D?


Technically there's no such thing as idiomatic D as D is 
multi-paradigm. You can see some sensible idioms here[4], but no, 
I would not recommend reading anyone's D code just to get a 
feeling for "idiomatic D".


On Saturday, 18 February 2017 at 20:15:55 UTC, timmyjose wrote:
4. I have heard good reports of D's metaprogramming 
capabilities (ironically enough, primarily from a thread on the 
Rust user group),


This doesn't surprise me, honestly, since Rust's (compile time) 
metaprogramming capabilities are below D's, and from my experience 
people in both communities are well aware of that. There are 
threads on Reddit about this topic if you have the time to dig 
them up. D's advanced compile time features are one of the main 
reasons I'm unlikely to switch to anything else for my tools (in 
my experience there is no other native programming language that 
lets me get things done as fast - in terms of development time - 
as D).


On Saturday, 18 February 2017 at 20:15:55 UTC, timmyjose wrote:
and coming from a Common Lisp (and some  Racket) background, I 
am deeply interested in this aspect. Are  D macros as powerful 
as Lisp macros? Are they semantically similar (for instance, I 
found Rust's macros are quite similar to Racket's)?


D does not have macros; it has compile time function 
execution[5], templates[6], mixins[7], and template mixins[8].
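
A minimal sketch of those facilities (all names in it are made up 
for illustration):

---
import std.conv : to;

// CTFE: an ordinary function, evaluated at compile time in a static context.
int square(int x) { return x * x; }
enum nine = square(3); // computed during compilation

// Template: code parameterized over types, instantiated at compile time.
T twice(T)(T x) { return x + x; }

// String mixin: compile-time generated code pasted into the program.
mixin("int answer = " ~ to!string(nine + 33) ~ ";");

// Template mixin: a reusable block of declarations mixed into a scope.
mixin template Counter()
{
    int count;
    void bump() { ++count; }
}

struct Foo { mixin Counter; }

void main()
{
    static assert(nine == 9);
    assert(twice(21) == 42);
    assert(answer == 42);
    Foo f;
    f.bump();
    assert(f.count == 1);
}
---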

On Saturday, 18 February 2017 at 20:15:55 UTC, timmyjose wrote:
5. Supposing I devote the time and energy and get up to speed 
on D, would the core language team be welcoming if I feel 

Re: Mallocator and 'shared'

2017-02-14 Thread Moritz Maxeiner via Digitalmars-d-learn

On Tuesday, 14 February 2017 at 14:27:05 UTC, Kagamin wrote:
On Monday, 13 February 2017 at 17:44:10 UTC, Moritz Maxeiner 
wrote:
To be clear: While I might, in general, agree that using 
shared methods only for thread safe methods seems to be a 
sensible restriction, neither language nor compiler require it 
to be so; and absence of evidence of a useful application is 
not evidence of absence.


Right, a private shared method can be a good use case for a 
thread-unsafe shared method.



---
__gshared int f = 0, x = 0;
Object monitor;

// thread 1
synchronized (monitor) while (f == 0);
// Memory barrier required here
synchronized (monitor) writeln(x)

// thread 2
synchronized (monitor) x = 42;
// Memory barrier required here
synchronized (monitor) f = 1;
---


Not sure about this example, it demonstrates a deadlock.


That's beside the point, but I guess I should've clarified the 
"not needed" as "harmful". The point was that memory barriers and 
synchronization are two separate solutions for two separate 
problems and your post scriptum about memory barriers disregards 
that synchronization does not apply to the problem memory 
barriers solve.


On Tuesday, 14 February 2017 at 14:27:05 UTC, Kagamin wrote:


My opinion on the matter of `shared` emitting memory barriers 
is that either the spec and documentation[1] should be updated 
to reflect that sequential consistency is a non-goal of 
`shared` (and if that is decided this should be accompanied by 
an example of how to add memory barriers yourself), or it 
should be implemented.


I'm looking at this in terms of practical consequences and 
useful language features.


So am I.

On Tuesday, 14 February 2017 at 14:27:05 UTC, Kagamin wrote:
What people are supposed to think and do when they see 
"guarantees sequential consistency"? I mean people at large.


That's a documentation issue, however, and is imho not relevant 
to the decision of whether one should or should not emit memory 
barriers. It's only relevant to how that decision is then 
presented to the people at large.


On Tuesday, 14 February 2017 at 14:27:05 UTC, Kagamin wrote:


I agree, message passing is considerably less tricky and 
you're unlikely to shoot yourself in the foot. Nonetheless, 
there are valid use cases where the overhead of MP may not be 
acceptable.


Performance was a reason to not provide barriers. People, who 
are concerned with performance, are even unhappy with virtual 
methods, they won't be happy with barriers on every memory 
access.


You seem to be trying to argue against someone stating that memory 
barriers should be emitted automatically, though I don't know why 
you think that's me. You initially stated that
Memory barriers are a bad idea because they don't defend from a 
race condition, but they look like they do
which I rebutted, since memory barriers have nothing to do with 
race conditions. Whether memory barriers should be automatically 
emitted by the compiler is a separate issue, one on which my 
position btw is that they shouldn't. The current documentation of 
`shared`, however, implies that such an emission (and the related 
sequential consistency) is a goal of `shared` (and just not - 
yet? - implemented) and does not reflect the apparently final 
decision that it's not.




Re: Mallocator and 'shared'

2017-02-14 Thread Moritz Maxeiner via Digitalmars-d-learn
On Tuesday, 14 February 2017 at 13:01:44 UTC, Moritz Maxeiner 
wrote:
Of course, I just wanted to point out that Kagamin's post 
scriptum is a simplification I cannot agree with. As a best 
practice? Sure. As a "never do it"? No.


Sorry for the double post, error in the above, please use this 
instead:


Of course, I just wanted to point out that Kagamin's
Thread unsafe methods shouldn't be marked shared, it doesn't 
make sense
is a simplification I cannot agree with. As a best practice? 
Sure. As a "never do it"? No.


Re: Mallocator and 'shared'

2017-02-14 Thread Moritz Maxeiner via Digitalmars-d-learn

On Tuesday, 14 February 2017 at 10:52:37 UTC, Johannes Pfau wrote:
The compiler of course can't require shared methods to be 
thread-safe as it simply can't prove thread-safety in all 
cases. This is like shared/trusted: You are supposed to make 
sure that a function behaves as expected. The compiler will 
catch some easy to detect mistakes (like calling a non-shared 
method from a shared method <=> system method from safe method) 
but you could always use casts, pointers, ... to fool the 
compiler.


You could use the same argument to mark any method as @trusted. 
Yes it's possible, but it's a very bad idea.


Though I do agree that there might be edge cases: In a single 
core, single threaded environment, should an interrupt function 
be marked as shared? Probably not, as no synchronization is 
required when calling the function.


But if the interrupt accesses a variable and a normal function 
accesses the variable as well, the access needs to be 
'volatile' (not cached into a register by the compiler; not 
closely related to this discussion) and atomic, as the 
interrupt might occur in between multiple partial writes. So 
the variable should be shared, although there's no 
multithreading (in the usual sense).


Of course, I just wanted to point out that Kagamin's post 
scriptum is a simplification I cannot agree with. As a best 
practice? Sure. As a "never do it"? No.


On Tuesday, 14 February 2017 at 10:52:37 UTC, Johannes Pfau wrote:

Am Mon, 13 Feb 2017 17:44:10 +
schrieb Moritz Maxeiner :
you'd still need those memory barriers. Also note that the 
synchronization in the above is not needed in terms of 
semantics.


However, if you move your synchronized to the complete sub-code 
blocks, barriers are not necessary. Traditional mutex locking is 
basically a superset and is usually implemented using barriers 
AFAIK. I guess your point is we need to define whether shared 
methods guarantee some sort of sequential consistency?


My point in those paragraphs was that synchronization and memory 
barriers solve two different problems that can occur in 
non-sequential programming, and that Kagamin's statement


Memory barriers are a bad idea because they don't defend from a 
race condition

therefore makes no sense (to me). But yes, I do think that the 
definition should have more background/context and not just the 
current "D FAQ states `shared` guarantees sequential consistency 
(not implemented)". Considering how many years that has been the 
state, I have personally concluded (for myself and how I deal with 
D) that sequential consistency is a non-goal of `shared`, but 
what's a person new to D supposed to think?


On Tuesday, 14 February 2017 at 10:52:37 UTC, Johannes Pfau wrote:


struct Foo
{
shared void doA() {lock{_tmp = "a";}};
shared void doB() {lock{_tmp = "b";}};
shared getA() {lock{return _tmp;}};
shared getB() {lock{return _tmp;}};
}

thread1:
foo.doB();

thread2:
foo.doA();
auto result = foo.getA(); // could return "b"

I'm not sure how a compiler could prevent such 'logic' bugs.


It's not supposed to. Also, your example does not implement the 
same semantics as what I posted, and yes, in your example there's 
no need for memory barriers. In the example I posted, 
synchronization is not necessary, but memory barriers are (and 
since synchronization is likely to have a significantly higher 
runtime cost than memory barriers, why would you want to use it 
there, even if it were possible).


On Tuesday, 14 February 2017 at 10:52:37 UTC, Johannes Pfau wrote:
However, I think it should be considered a best practice to 
always make a shared function a self-contained entity so that 
calling any other function in any order does not negatively 
affect the results. Though that might not always be possible.


Yes, that matches what I tried to express.

On Tuesday, 14 February 2017 at 10:52:37 UTC, Johannes Pfau wrote:

Am Mon, 13 Feb 2017 17:44:10 +
schrieb Moritz Maxeiner :
My opinion on the matter of `shared` emitting memory barriers 
is that either the spec and documentation[1] should be updated 
to reflect that sequential consistency is a non-goal of 
`shared` (and if that is decided this should be accompanied by 
an example of how to add memory barriers yourself), or it 
should be implemented. Though leaving it in the current "not 
implemented, no comment / plan on whether/when it will be 
implemented" state seems to have little practical consequence 
- since no one seems to actually work on this level in D - and 
I can thus understand why dealing with that is just not a 
priority.


I remember some discussions about this some years ago and IIRC 
the final decision was that the compiler will not magically 
insert any barriers for shared variables. Instead we have 
well-defined intrinsics in std.atomic dealing with this. Of 
course most of this stuff isn't implemented (no shared support 
in core.sync).


-- Johannes


Good to know, thanks, I seem to have missed 

Re: Mallocator and 'shared'

2017-02-13 Thread Moritz Maxeiner via Digitalmars-d-learn

On Monday, 13 February 2017 at 14:20:05 UTC, Kagamin wrote:
Thread unsafe methods shouldn't be marked shared, it doesn't 
make sense. If you don't want to provide thread-safe interface, 
don't mark methods as shared, so they will not be callable on a 
shared instance and thus the user will be unable to use the 
shared object instance and hence will know the object is thread 
unsafe and needs manual synchronization.


To be clear: While I might, in general, agree that using shared 
methods only for thread safe methods seems to be a sensible 
restriction, neither language nor compiler require it to be so; 
and absence of evidence of a useful application is not evidence 
of absence.


On Monday, 13 February 2017 at 14:20:05 UTC, Kagamin wrote:
ps Memory barriers are a bad idea because they don't defend 
from a race condition, but they look like they do :)


There are two very common pitfalls in non-sequential programming 
with regards to reads/writes to memory shared between threads:
Issue 1: Sequencing/Interleaving of several threads into the 
logical memory access order

Issue 2: Reordering of code within one thread

Code that changes semantics because of issue 1 has race 
conditions; fixing it requires synchronization primitives, such 
as locking opcodes (mutexes), transactional memory, etc.


Code that changes semantics because of issue 2 may or may not 
have race conditions, but it definitely requires memory barriers.


Claiming that memory barriers are a bad idea because they don't 
defend against race conditions, but look like they do (when 
that's what synchronization is for) is similar enough to saying 
airbags in cars are a bad idea because they don't keep your body 
in place, but look like they do (when that's what seat belts are 
for).
My point here being that I don't understand what made you state 
that memory barriers look like they deal with race conditions, as 
they have nothing to do with that.


To be clear: Synchronization (the fix for race conditions) does 
not help you to deal with issue 2. If my last example had instead 
been


---
__gshared int f = 0, x = 0;
Object monitor;

// thread 1
synchronized (monitor) while (f == 0);
// Memory barrier required here
synchronized (monitor) writeln(x)

// thread 2
synchronized (monitor) x = 42;
// Memory barrier required here
synchronized (monitor) f = 1;
---

you'd still need those memory barriers. Also note that the 
synchronization in the above is not needed in terms of semantics. 
The code has no race conditions, all permutations of the 
(interleaved) memory access order yield the same output from 
thread 1. Also, since synchronization primitives and memory 
barriers have different runtime costs, depending on your hardware 
support and how they are translated to that support from D, 
there's no "one size fits all" solution on the low level we're on 
here.


My opinion on the matter of `shared` emitting memory barriers is 
that either the spec and documentation[1] should be updated to 
reflect that sequential consistency is a non-goal of `shared` 
(and if that is decided this should be accompanied by an example 
of how to add memory barriers yourself), or it should be 
implemented. Though leaving it in the current "not implemented, 
no comment / plan on whether/when it will be implemented" state 
seems to have little practical consequence - since no one seems 
to actually work on this level in D - and I can thus understand 
why dealing with that is just not a priority.


On Monday, 13 February 2017 at 14:20:05 UTC, Kagamin wrote:
use std.concurrency for a simple and safe concurrency, that's 
what it's made for.


I agree, message passing is considerably less tricky and you're 
unlikely to shoot yourself in the foot. Nonetheless, there are 
valid use cases where the overhead of MP may not be acceptable.


[1] https://dlang.org/faq.html#shared_guarantees


Re: Mallocator and 'shared'

2017-02-12 Thread Moritz Maxeiner via Digitalmars-d-learn

On Monday, 13 February 2017 at 01:30:57 UTC, ag0aep6g wrote:
This doesn't make sense to me. b depends on a. If I run thread 
1 alone, I can expect b to be 1, no?  Thread 2 can then a) read 
0, write 1; or b) read 1, write 2. How can b be 0 when the 
writeln is executed?


An example like this makes more sense to me:


shared int a = 0, b = 0;

// Thread 1:
a = 1;
b = 2;

// Thread 2:
writeln(a + b);


One might think that thread 2 cannot print "2", because from 
the order of statements the numbers must be 0 + 0 or 1 + 0 or 1 
+ 2. But thread 1 might execute `b = 2` before `a = 1`, because 
the order doesn't matter to thread 1. So 0 + 2 is possible, too.


You're right, of course, and I shall do well to remember not to 
think up examples for non-sequential code at such an hour; I am 
sorry. Thank you for providing a correct example plus 
explanation. The rest of my post still stands, though.
In recompense I shall provide another example, this one 
translated from Wikipedia[1] instead:


__gshared int f = 0, x = 0;

// thread 1
while (f == 0);
// Memory barrier required here
writeln(x)

// thread 2
x = 42;
// Memory barrier required here
f = 1;

The above demonstrates a case where you do need a memory barrier:
thread 1 and 2 have a consumer/producer relationship, where 
thread 1 wants to consume a value from thread 2 via `x`, using 
`f` to be notified that `x` is ready to be consumed;
Without memory barriers at both of the indicated lines the cpu is 
free to reorder either thread:
The first is required so that thread 1 doesn't get reordered to 
consume before being notified and the second so that thread 2 
doesn't get reordered to signal thread 1 before having produced.


If we had made `f` and `x` `shared` instead of `__gshared` the 
spec would require (at least) the two indicated memory barriers 
being emitted. Currently, though, it won't and for this case 
(AFAIK) `shared` won't get you any benefit I'm aware of over 
`__gshared`. You'll still need to add those memory barriers 
(probably using inline assembler, though I'm not sure what the 
best way is in D, since I usually just stick with message 
passing).
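
For completeness, a minimal (untested) sketch of one way to get 
those barriers without inline assembler, assuming core.atomic's 
acquire/release primitives and switching `f` and `x` to `shared` 
so the intrinsics accept them:

---
import core.atomic : atomicLoad, atomicStore, MemoryOrder;
import std.stdio : writeln;

shared int f = 0;
shared int x = 0;

// thread 1 (consumer)
void consume()
{
    // acquire: reads below cannot be reordered before this load
    while (atomicLoad!(MemoryOrder.acq)(f) == 0) {}
    writeln(atomicLoad!(MemoryOrder.raw)(x));
}

// thread 2 (producer)
void produce()
{
    atomicStore!(MemoryOrder.raw)(x, 42);
    // release: the write to x above cannot be reordered after this store
    atomicStore!(MemoryOrder.rel)(f, 1);
}
---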


[1] https://en.wikipedia.org/wiki/Memory_barrier


Re: Mallocator and 'shared'

2017-02-12 Thread Moritz Maxeiner via Digitalmars-d-learn

On Sunday, 12 February 2017 at 20:08:05 UTC, bitwise wrote:
It seems that methods qualified with 'shared' may be what 
you're suggesting matches up with the 'bridge' I'm trying to 
describe, but again, using the word 'shared' to mean both 
'thread safe' and 'not thread safe' doesn't make sense. [...]


For essentially all that follows, refer to [1][2]
`shared` (as well as `__gshared`) on a variable has the semantics 
of multiple threads sharing one single memory location for that 
variable (i.e. it will not be put into thread local storage). 
Accessing such data directly is inherently not thread safe. 
Period. You will need some form of synchronization (see [3]) to 
access such data in a thread safe manner.
Now, `shared` is supposed to additionally provide memory 
barriers, so that reads/writes on such variables are guaranteed 
not to be reordered in a way that breaks your algorithm; 
remember, the compiler (and also later the cpu when it reorders 
the opcode) is allowed to reorder reads/writes to a memory 
location to be more efficient, as long as doing so won't change 
the logic as the compiler or cpu sees it. Example:


__gshared int a = 0;

// thread 1:
a = 1;
int b = a;
writeln(b);

// thread 2:
a += 1;

In the above, you may expect `b` to be either 1, or 2, depending 
on how the cpu interleaves the memory access, but it can, in 
fact, also be 0, since neither the compiler, nor the cpu can 
detect any reason as to why `a = 1` should need to come before 
`int b = a` and may thus reorder the write and the read. Memory 
barriers prevent such reordering in the cpu and if we had made 
`a` `shared` those barriers would've been supposed to be emitted 
by the compiler (in addition to not reordering them itself). 
Unfortunately, that emission is not implemented.


From [4]:
Non-static member functions can have, in addition to the usual 
FunctionAttributes, the attributes const, immutable, shared, or 
inout. These attributes apply to the hidden this parameter.


Thus a member function being `shared` means nothing more than 
that the instance it is called on must also be `shared`, i.e.


class Foo
{
    shared void bar();
}

Foo foo;
foo.bar();    // this is illegal, `foo` (the hidden `this` of `bar`) is not shared

shared Foo foobar;
foobar.bar(); // this is legal, since `foobar` is shared

That's it, there are no two meanings of `shared` depending on 
some context, there is only one: The data in question, which is 
either the attributed variable, or the object/instance of the 
member function being attributed, is shared between threads and 
accessing it directly is not thread safe.


On Sunday, 12 February 2017 at 20:08:05 UTC, bitwise wrote:
I thought 'shared' was a finished feature, but it's starting to 
seem like it's a WIP.


I prefer the term "unfinished" since "WIP" implies that it's 
being worked on. AFAIK there's no one currently working on 
implementing what's missing in the compiler frontend with regards 
to the spec.


On Sunday, 12 February 2017 at 20:08:05 UTC, bitwise wrote:
This kind of feature seems like it has great potential, but is 
mostly useless in its current state.


I share that opinion and generally either use `__gshared` if I 
absolutely have to share data via shared memory and design 
carefully to avoid all the potential issues, or - which I much 
prefer - use message passing: `std.concurrency` is your friend.
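
For completeness, a minimal message-passing sketch (the worker and 
the values are made up):

---
import std.concurrency : spawn, send, receiveOnly, ownerTid;
import std.stdio : writeln;

void worker()
{
    // Receive one int from the owner and reply with its square;
    // no shared mutable state is involved.
    immutable n = receiveOnly!int();
    ownerTid.send(n * n);
}

void main()
{
    auto tid = spawn(&worker);
    tid.send(7);
    writeln(receiveOnly!int()); // prints 49
}
---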


On Sunday, 12 February 2017 at 20:08:05 UTC, bitwise wrote:
After more testing with shared, it seems that 'shared' data is 
mutable from many contexts, from which it would be unsafe to 
mutate it without locking first, which basically removes any 
gauruntee that would make 'shared' useful.


As pointed out above that's to be expected, since that's its job. 
Regarding guarantees: Since D treats data as thread local by 
default, you need either `shared` or `__gshared` to have mutable 
shared (intra-process) memory (ignoring OS facilities for 
inter-process shared memory). The main advantage is not in data 
being `shared`/`__gshared`, but in the guarantees that all the 
other (unattributed, thread local) data gets: Each thread has its 
own copies and any optimisations applied to code that accesses 
them need not consider multiple threads (I'd wager this is a 
significant boon towards D's fast compile times).
If you only talk about useful benefits of `shared` over 
`__gshared`, if the spec were properly implemented, the useful 
properties would include you not needing to worry about memory 
barriers. Other useful guarantees are the more rigorous type 
checks, when compared to `__gshared`, which are supposed to 
prevent you from committing some of the common mistakes occurring 
in non-sequential programming (see, e.g. the code example with 
`class Foo` above).
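
To illustrate the thread-local default, a minimal sketch (the 
variable names are made up):

---
import core.thread : Thread;

int tls = 0;          // one copy per thread (D's default)
__gshared int global; // one copy shared by all threads

void main()
{
    tls = 1;
    global = 1;
    auto t = new Thread({
        assert(tls == 0);    // this thread got its own fresh copy
        assert(global == 1); // ...but sees the very same `global`
    });
    t.start();
    t.join();
}
---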


On Sunday, 12 February 2017 at 20:08:05 UTC, bitwise wrote:
Again, tell me if I'm wrong here, but there seems to be a lot 
of holes in 'shared'.


There are holes in the implementation of `shared`; its spec, 

Re: Questionnaire

2017-02-08 Thread Moritz Maxeiner via Digitalmars-d-announce
On Wednesday, 8 February 2017 at 18:27:57 UTC, Ilya Yaroshenko 
wrote:

1. Why your company uses  D?

  a. D is the best
  b. We like D
  c. I like D and my company allowed me to use D
  d. My head like D
  e. Because marketing reasons
  f. Because my company can be more efficient with D for some 
tasks than with any other system language


I use D only privately so far.



2. Does your company uses C/C++, Java, Scala, Go, Rust?


I've seen C, C++, and Java being used.



3. If yes, what the reasons to do not use D instead?


Nobody has ever heard of the language (this holds true in pretty 
much every discussion I have on the topic).




2. Have you use one of the following Mir projects in production:

  a. https://github.com/libmir/mir
  b. https://github.com/libmir/mir-algorithm
  c. https://github.com/libmir/mir-cpuid
  d. https://github.com/libmir/mir-random
  e. https://github.com/libmir/dcv - D Computer Vision Library
  f. std.experimental.ndslice



No.

3. If Yes, can Mir community use your company's logo in a 
section "Used by" or similar.




N/A

4. Have you use one of the following Tamedia projects in your 
production:


  a. https://github.com/tamediadigital/asdf
  b. https://github.com/tamediadigital/je
  c. https://github.com/tamediadigital/lincount



I've used asdf for configuration files[1][2]; it works very well 
for shortening development time.
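
Roughly like this (a sketch from memory, assuming asdf's 
`deserialize`/`serializeToJson` entry points; the config type is 
made up):

---
import asdf : deserialize, serializeToJson;

struct Config
{
    string host;
    ushort port;
}

void main()
{
    auto cfg = `{"host": "localhost", "port": 8080}`.deserialize!Config;
    assert(cfg.host == "localhost" && cfg.port == 8080);
    assert(cfg.serializeToJson.length > 0);
}
---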



5. What D misses to be commercially successful languages?


My two cents:
- "Name" backing by a well-known (i.e. internationally famous) 
corporation/foundation

- Viral marketing ("spread the D")
- Fix or removal of all the little things that may make someone 
go "ugh, wtf?". I'm looking at you, `shared`, and your missing 
memory barriers[5], or you, `std.parallelism.taskPool`, and your 
non-daemon "daemon" threads[6]. Privately I can work around them 
since it's my own time, but I don't expect many people in big 
companies (see first point) with a deadline to want to put up 
with that.

- Tooling, though that's been getting better
- Phobos without GC (where possible)
- std.experimental.allocator -> std.allocator and promote it as 
*the* memory management interface for D. Seriously. With it I can 
even allocate and pass delegates to C in an intuitive way (see 
[3] and [4]).
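
A minimal sketch of why I like it (assuming the Mallocator 
building block; the struct is made up):

---
import std.experimental.allocator : make, dispose;
import std.experimental.allocator.mallocator : Mallocator;

struct Point { int x, y; }

void main()
{
    // Allocate with malloc-backed memory instead of the GC...
    auto p = Mallocator.instance.make!Point(1, 2);
    assert(p.x == 1 && p.y == 2);
    // ...and free it deterministically when done.
    Mallocator.instance.dispose(p);
}
---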




6. Why many topnotch system projects use C programming language 
nowadays?


Don't know if the premise holds, but if it does I'd wager it's 
because people who *do* write topnotch (system) software can do 
so in *any* (system) language that's asked of them - since in the 
end the topnotch comes from the person writing the code, not the 
language ("ignorance (of a language) can be remedied, stupid is 
forever") - and C has the de facto corporate monopoly of being 
asked to write in.



[1] https://git.ucworks.org/UCWorks/dagobar/tree/master
[2] https://git.ucworks.org/UCWorks/tunneled/tree/master
[3] 
https://git.ucworks.org/UCWorks/dagobar/blob/master/source/libuv.d#L125
[4] 
https://git.ucworks.org/UCWorks/dagobar/blob/master/source/libuv.d#L159

[5] https://dlang.org/faq.html#shared_guarantees
[6] https://issues.dlang.org/show_bug.cgi?id=16324


llvm-d 2.0

2017-01-28 Thread Moritz Maxeiner via Digitalmars-d-announce
New major release of `llvm-d` with some backwards-incompatible 
API changes; please read the release message for the details. 
Cliffnotes:
- Just `import llvm`
- Remove `LLVM.load`, (dynamically) link against the appropriate 
library/libraries
- Set a D version `LLVM_Target_XyZ` for every LLVM target `XyZ` 
you need (see the sketch below).
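
A hypothetical usage sketch (assuming the X86 target and dynamic 
linking against a system LLVM; target names and flags depend on 
your setup):

---
// Build with e.g.: dmd -version=LLVM_Target_X86 app.d -L-lLLVM
import llvm;

void main()
{
    // llvm-d exposes the LLVM-C API, so the usual C entry points apply.
    auto ctx = LLVMContextCreate();
    scope (exit) LLVMContextDispose(ctx);
}
---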


https://github.com/Calrama/llvm-d/releases/v2.0.0
https://code.dlang.org/packages/llvm-d/2.0.0

- Moritz


Re: std.parallelism.taskPool daemon threads not terminating

2016-06-17 Thread Moritz Maxeiner via Digitalmars-d-learn

On Friday, 17 June 2016 at 14:29:57 UTC, Russel Winder wrote:


A priori, assuming I am not missing anything, this behaviour 
seems entirely reasonable.


I agree that, when using non-daemon threads (and I personally 
think that should be the default), it is. But I cannot bring 
that into accord with the documentation of taskPool (the 
property)[1]:



Returns a lazily initialized global instantiation of TaskPool.
[...]
The worker threads in this pool are daemon threads, meaning 
that it is not necessary to call TaskPool.stop or 
TaskPool.finish before terminating the main thread.


A daemon thread is automatically terminated when all non-daemon 
threads have terminated.
A non-daemon thread will prevent a program from terminating as 
long as it has not terminated.


The above - while not explicitly stating that daemon threads do 
not prevent a program from terminating - strongly suggests it, to 
me (and if they indeed do not, then I would ask how daemon 
threads are different from non-daemon threads in the context of 
TaskPool, since I'm unable to make that out from the 
documentation).


The task is an infinite loop so it never terminates. This means 
the threadpool does not stop working, which means the program 
does not terminate.


Yes, that example is intentionally chosen that way to make my 
point. I initially discovered this while putting a synchronous 
read of STDIN in a loop, but that example might have diverted 
attention to something other than I intended.




I suspect that daemon may not mean what you think it means. At 
least not with respect to the threadpool.


I do, too, which is why I asked here, since after having read the 
relevant documentation several times with significant time delay 
in between I still cannot make out how else to interpret it (and 
I got no reply in #dlang IRC).


[1] https://dlang.org/library/std/parallelism/task_pool.html


std.parallelism.taskPool daemon threads not terminating

2016-06-16 Thread Moritz Maxeiner via Digitalmars-d-learn

So, I am probably overlooking something obvious, but here goes:
According to my understanding of daemon threads and what is 
documented here[1], the following program should terminate once 
the druntime shuts down, as the thread working on the task is 
supposed to be a daemon thread:



import std.parallelism;

void main()
{
    taskPool.put(task({ while(true) {} }));
}


The actual behaviour (with dmd 2.071 and ldc2 1.0.0), however, is 
that the program keeps running.


In contrast, this behaves as expected:


import core.thread;

void main()
{
    with (new Thread({ while(true) {} })) {
        isDaemon = true;
        start();
    }
}


Commenting out setting the isDaemon property will achieve the 
same behaviour as the taskPool example. Is this the intended 
behaviour of taskPool (because it does have isDaemon set)?



[1] https://dlang.org/library/std/parallelism/task_pool.html


Re: DConf 2016 news: 20% sold out, book signing

2015-12-07 Thread Moritz Maxeiner via Digitalmars-d-announce
On Monday, 7 December 2015 at 17:39:14 UTC, Andrei Alexandrescu 
wrote:

We're over 20% full and seats are going fast!

We planned to send an announcement when we're 50% sold out. 
However, this time around registrations are coming quite a bit 
quicker than before so we thought we'd keep you posted earlier.


[...]


A fellow student and I just booked.
Looking forward to it quite a bit :)

calrama


Re: [OT] Regarding most used operating system among devs

2015-04-12 Thread Moritz Maxeiner via Digitalmars-d

On Thursday, 9 April 2015 at 18:28:46 UTC, Marco Leise wrote:

Am Wed, 08 Apr 2015 13:05:01 +
schrieb Szymon Gatner noem...@gmail.com:


On Wednesday, 8 April 2015 at 12:34:06 UTC, Paulo  Pinto wrote:

 Since then, I always favor spaces over tabs. One space is 
 always one space.



Not to start a war but agreed ;) 2 spaces (specifically) FTW!


You see, there's the reason why we tab users use tabs.


Even though I will probably get the same response as shown here 
[1], I consider the most sensible (and imho the technically 
correct) way to be both:
- Tabs for indentation, because one tab is always one indentation 
level, regardless of the actual visual width.
- Spaces for alignment, because inside one indentation level you 
should be able to align code without interacting with the 
indentation level at all, keeping the two separate.


This way allows any reader to always have correct indentation and 
alignment, while still being able to choose how wide one 
indentation level should be rendered.


As such: Smart Tabs FTW!

[1] http://www.emacswiki.org/emacs/SmartTabs


Re: A reason to choose D over Go

2015-03-28 Thread Moritz Maxeiner via Digitalmars-d

On Sunday, 22 March 2015 at 01:44:32 UTC, weaselcat wrote:

On Sunday, 22 March 2015 at 01:24:10 UTC, Martin Nowak wrote:

On Saturday, 21 March 2015 at 23:49:26 UTC, Atila Neves wrote:

I actually think that there are two large categories of
programmers: those like writing the same loops over and over
again and those who use algorithms.


I agree, at some point I learned that there is a huge cultural 
distinction between C and C++ programmers.


yes, the other main distinction are the people who correctly 
put the * next to the type because it's part of the type, or 
the wrong people who put it next to the variable name because 
they're heathens


+1


Re: const as default for variables

2015-03-19 Thread Moritz Maxeiner via Digitalmars-d

On Saturday, 14 March 2015 at 20:15:30 UTC, Walter Bright wrote:
I've often thought, as do many others here, that immutability 
should be the default for variables.


Case (1) is what I'm talking about here. If it is made const, 
then there are a couple ways forward in declaring a mutable 
variable:


The following is just my point of view, so take it with a grain 
of salt and correct me if I state/understand something wrong:
I usually abstain from participating in discussions here, because 
more often than not someone else will already more or less write 
what I would, so there is little point in my writing what has 
already been posted. This issue, however, I consider fairly 
important, as what you propose would make me classify D as 
"don't touch", which I really don't want, considering that I've 
been following and using D for the better part of ten years; let 
me explain why:


There exists an abstract amount of data that I want to store 
somewhere and access within my program. I shall call one instance 
of something I put my data into a "storage entity" (SE for short). 
Now, depending on what properties my data inherently has (or I 
may additionally attribute to it), I may want or need an SE to
- allow any data within it to be changed ([wholly] mutable)
- prohibit any data within it to be changed ([wholly] immutable)
- allow some of the data within it to be changed (partially 
mutable)
[Here and unless otherwise stated I do not use immutable in the 
transitive meaning that D currently applies to it, instead it is 
only applied to one SE]


The first of the three is what is generally called a variable in 
computer science (CS), the second a constant. SEs of the third 
type are also mostly referred to as variables, as they are 
usually implemented as an extension of the first. I know that 
from a mathematical standpoint a variable is only a symbol with 
an attributed value, without any associated notion of 
(im)mutability, so even a constant would be a variable, but this 
is not how the terminology is used in CS. In CS a variable must 
allow some kind of mutability; not necessarily wholly, but 
without mutability it would be a constant, not a variable. As 
such, should D's SEs default to being wholly immutable (which you 
seem to propose), D should not call them variables anymore 
(since they aren't), but instead clearly state that D's SEs 
default to being constants and if you want a variable, do [...].


With only primitives (no pointers), there can be no partial 
mutability: you are either allowed to assign a new (primitive) 
value or you are not. Partial mutability becomes a serious 
concern, however, once pointers/references are involved, e.g. if 
you want to reference an SE that is wholly immutable. Does your 
reference automatically also become immutable (as I understand it 
- and please correct me if I am wrong here - this is what D's 
transitive immutable means)? I understand that with this 
extremely complex issue it may seem desirable to instead default 
to whole non-transitive immutability and make people explicitly 
state when they want their SEs to be mutable. One might argue 
that it would make a lot of things simpler for everyone involved.


However, D is a systems programming language and I would 
counter-argue that I believe the number of partially mutable SEs 
far outweighs the number of wholly immutable ones, and that 
having something like

int foo = 5;
[...]
foo = 6;
produce a compile error because foo is by default 
non-transitively immutable [D terminology would be const, I 
think] is something I can only call absurd, for the following reason:


It breaks with the convention systems programming languages have 
been using for a very long time. While I'm not generally against 
cutting off traditions no longer needed, I believe this would 
have a serious negative impact on people coming from C/C++ who 
are expecting new cool stuff (which D definitely has) without 
ground-breaking changes. The longer the list of core differences 
to the way you code you have to remind yourself about when 
switching to D, the less likely you will switch to D, I think.


What I would propose is the following: Have the compiler frontend 
track, for all SEs, whether they are assigned to more than once 
(counting the initial assignment). Any SE that isn't can safely 
be marked const (or non-transitive immutable) in the way 
described in your opening post. However, if an SE is assigned to 
at least twice, it must not be marked as const. This should - in 
my opinion - give about the same level of safety as marking 
everything const by default while not breaking with any 
long-standing conventions.


Re: What exactly shared means?

2015-01-02 Thread Moritz Maxeiner via Digitalmars-d-learn

On Friday, 2 January 2015 at 11:47:47 UTC, Daniel Kozak wrote:
I always think that shared should be used to make a variable 
global across threads (similar to __gshared) with some 
synchronization protection. But this code doesn't work (the app 
is stuck in _aaGetX or _aaRehash):



But when I add synchronized block it is OK:



I am not aware of any changes since the following thread (see the 
second post): 
http://forum.dlang.org/thread/brpbjefcgauuzguyi...@forum.dlang.org#post-mailman.679.1336909909.24740.digitalmars-d-learn:40puremagic.com


So AFAIK shared is currently nothing more than a compiler hint 
(despite the documentation suggesting otherwise; the 
second-to-last paragraph of the __gshared doc compares it to 
shared, see http://dlang.org/attribute.html).


My current understanding is that you either use __gshared and 
do your own synchronisation, or you use thread local storage, 
i.e. do not use shared, but I would be happy to be proven wrong 
on that point.
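
To illustrate the first option, a minimal (untested) sketch of 
__gshared plus manual synchronisation - the names are made up:

---
import core.thread : Thread;

__gshared int[string] counts; // one instance, shared by all threads
__gshared Object countsLock;

shared static this() { countsLock = new Object; }

void bump(string key)
{
    synchronized (countsLock) // every access goes through the lock
    {
        if (auto p = key in counts) ++*p;
        else counts[key] = 1;
    }
}

void main()
{
    auto t = new Thread({ foreach (i; 0 .. 1000) bump("a"); });
    t.start();
    foreach (i; 0 .. 1000) bump("a");
    t.join();
    assert(counts["a"] == 2000);
}
---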


BTW, you can use the following search to find more information 
(it is what I used to find the above linked thread): 
https://www.google.com/?#q=site:forum.dlang.org+shared


Re: on interfacing w/C++

2014-04-16 Thread Moritz Maxeiner via Digitalmars-d

On Tuesday, 15 April 2014 at 11:04:42 UTC, Daniel Murphy wrote:
Manu via Digitalmars-d digitalmars-d@puremagic.com wrote in 
message 
news:mailman.9.1397553786.2763.digitalmar...@puremagic.com...



Huh? Do methods work now? Since when?


Since I needed them for DDMD.



Is this[1] then out of date and I can interface with non-virtual 
methods? Because that's what your post seems to imply (unless I 
misunderstood).


[1] http://dlang.org/cpp_interface.html



Re: on interfacing w/C++

2014-04-16 Thread Moritz Maxeiner via Digitalmars-d

On Wednesday, 16 April 2014 at 14:00:24 UTC, Daniel Murphy wrote:
Moritz Maxeiner  wrote in message 
news:kvzwlecwougswrqka...@forum.dlang.org...


Is this[1] then out of date and I can interface with 
non-virtual methods? Because that's what your post seems to 
imply (unless I misunderstood).


[1] http://dlang.org/cpp_interface.html


Yes.  The best place to look for concrete examples of what is 
supported is probably the C++ tests in the test suite.  (ie 
files containing EXTRA_CPP_SOURCES)


That sounds very cool. I've had a look at [1] and [2], which seem 
to be the two files with the new C++ class interfacing. As far as 
I could tell, you need to create any instances of C++ classes 
with C++ code / you don't bind to the constructors directly from 
D, and the new instance will not be managed by D's GC? Because if 
I used this new interfacing for e.g. llvm-d, I need to be sure 
that D's GC won't touch any of the instances under any 
circumstances, since they are freed by LLVM's internal logic, 
which the GC cannot track.


[1] 
https://github.com/D-Programming-Language/dmd/blob/master/test/runnable/externmangle.d
[2] 
https://github.com/D-Programming-Language/dmd/blob/master/test/runnable/extra-files/externmangle.cpp

