Re: D Language Foundation October 2023 Quarterly Meeting Summary

2023-12-11 Thread Timon Gehr via Digitalmars-d-announce

On 12/11/23 20:55, Timon Gehr wrote:


There is the following trick. Not ideal since the length cannot be 
inferred, but this successfully injects alloca into the caller's scope.




I see Nick already brought it up.



Re: D Language Foundation October 2023 Quarterly Meeting Summary

2023-12-11 Thread Timon Gehr via Digitalmars-d-announce

On 12/6/23 17:28, Mike Parker wrote:



One way to do that in D is to use `alloca`, but that's an issue because 
the memory it allocates has to be used in the same function that calls 
the `alloca`. So you can't, e.g., use `alloca` to alloc memory in a 
constructor, and that prevents using it in a custom array 
implementation. He couldn't think of a way to translate it.


There is the following trick. Not ideal since the length cannot be 
inferred, but this successfully injects alloca into the caller's scope.


```d
import core.stdc.stdlib: alloca;
import std.range: ElementType;
import core.lifetime: moveEmplace;

struct VLA(T, alias len){
    T[] storage;
    this(R)(R initializer, return void[] storage=alloca(len*T.sizeof)[0..len*T.sizeof]){
        this.storage=cast(T[])storage;
        foreach(ref element; this.storage){
            assert(!initializer.empty);
            auto init=initializer.front;
            moveEmplace!T(init, element);
            initializer.popFront();
        }
    }
    ref T opIndex(size_t i)return{ return storage[i]; }
    T[] opSlice()return{ return storage; }
}

auto vla(alias len, R)(R initializer, void[] storage=alloca(len*ElementType!R.sizeof)[0..len*ElementType!R.sizeof]){
    return VLA!(ElementType!R, len)(initializer, storage);
}

void main(){
    import std.stdio, std.string, std.conv, std.range;
    int x=readln.strip.to!int;
    writeln(vla!x(2.repeat(x))[]);
}
```



Re: opIndexAssign

2023-10-05 Thread Timon Gehr via Digitalmars-d-learn

On 10/3/23 00:11, Salih Dincer wrote:

Hi,

opIndexAssign, which returns void, does not cooperate with opIndex, which 
returns a ref!  Solution: use opSliceAssign.  Could this be a bug?  Because 
there is no problem in older versions (e.g. v2.083).


```d
struct S
{
   int[] i;

   ref opIndex(size_t index) => i[index];

   auto opSliceAssign/*
   auto opIndexAssign//*/
   (int value) => i[] = value;
}

void main()
{
   auto s = S([1, 2]);

   s[] = 2;
   assert(s.i == [2, 2]);

   s[1] = 42;
   assert(s.i == [2, 42]);
}
```
**Source:** https://run.dlang.io/is/G3iBEw
If you convert line 7 to a comment with //, you will see this error:

onlineapp.d(19): Error: function `onlineapp.S.opIndexAssign(int value)` is not callable using argument types `(int, int)`

onlineapp.d(19):        expected 1 argument(s), not 2


SDB@79



void opIndexAssign(int value, int index){ i[index] = value; }
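Spelled out, that two-argument overload sits alongside the original operators (a sketch based on Salih's struct; with it, both `s[] = 2` and `s[1] = 42` compile):

```d
struct S
{
    int[] i;

    ref int opIndex(size_t index) => i[index];

    // handles `s[] = value`
    auto opSliceAssign(int value) => i[] = value;

    // handles `s[index] = value`: the value comes first, then the index
    void opIndexAssign(int value, size_t index) { i[index] = value; }
}

void main()
{
    auto s = S([1, 2]);

    s[] = 2;
    assert(s.i == [2, 2]);

    s[1] = 42;
    assert(s.i == [2, 42]);
}
```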


Re: Problem with dmd-2.104.0 -dip1000 & @safe

2023-06-11 Thread Timon Gehr via Digitalmars-d-learn

On 6/9/23 06:05, An Pham wrote:

Getting with below error for following codes. Look like bug?
onlineapp.d(61): Error: scope variable `a` assigned to non-scope 
parameter `a` calling `foo`


    @safe:

    struct A(S = string)
    {
    @safe:
        S s;
        void delegate() c;
    }

    struct B(S = string)
    {
    @safe:
        @disable this();

        this(C!S c, A!S a)
        {
            this.c = c;
            this.a = a;
        }

        C!S foo()
        {
            return c;
        }

        A!S a;
        C!S c;
    }

    class C(S = string)
    {
    @safe:
        C!S foo(A!S a)
        {
            auto o = new Object();
            return foo2(o, a);
        }

        C!S foo2(Object n, A!S a)
        {
            auto b = B!S(this, a);
            return b.foo();
        }
    }

    unittest
    {
        static struct X
        {
        @safe:
            void foo3()
            {
            }
        }

        X x;
        A!string a;
        a.s = "foo";
        a.c = &x.foo3;
        auto c = new C!string();
        c.foo(a);
    }

void main()
{
}



I think the behavior you are seeing here is by design. There are two 
things happening:


- There is no `scope` inference for virtual methods, as it is impossible 
to get the inference right without knowing all overriding methods in 
advance.


- You cannot mark the parameter `a` `scope`, because the lifetimes of 
`this` and `a` become conflated within `b` in the body of `foo2` when 
calling the constructor of `B!S`, and `this` is subsequently escaped.


As Dennis points out, a workaround is to mark the parameter `a` `return 
scope`. However, this may lead to other problems down the line, as `a` 
is never actually escaped.
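Reduced to a sketch (names and shapes are hypothetical here, not An Pham's original code), the workaround looks like this:

```d
@safe:

struct A
{
    string s;
}

class C
{
    // `return scope` permits passing a scope argument by promising that
    // `a` escapes at most through the return value. Here it never
    // actually escapes, which is why this is only a workaround and may
    // cause trouble further down the line.
    C foo(return scope A a)
    {
        return this;
    }
}

void main()
{
    scope a = A("foo");
    auto c = new C();
    assert(c.foo(a) is c);
}
```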


Re: Why are globals set to tls by default? and why is fast code ugly by default?

2023-04-01 Thread Timon Gehr via Digitalmars-d-learn

On 4/1/23 17:02, Ali Çehreli wrote:


Does anyone have documentation on why Rust and Zig do not do thread 
local by default?


Rust just does not do mutable globals except in unsafe code.
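For contrast, D's default and its opt-outs can be seen in a small sketch (hedged: mutating genuinely `shared` data additionally needs atomics or synchronization; `__gshared` is shown here precisely because it is unchecked):

```d
import core.thread;

int perThread;          // D default: thread-local storage
__gshared int global;   // process-global, no type-system checking

void main()
{
    perThread = 1;
    global = 1;
    auto t = new Thread({
        // a new thread sees a fresh, zero-initialized `perThread`,
        // but the very same `global`
        assert(perThread == 0);
        assert(global == 1);
    });
    t.start();
    t.join(); // rethrows any assertion failure from the thread
}
```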


Re: Safer Linux Kernel Modules Using the D Programming Language

2023-01-16 Thread Timon Gehr via Digitalmars-d-announce

On 1/12/23 07:25, Walter Bright wrote:


But also, adding dynamic arrays to C won't make the currently existing 
C code safer, the one they care about, because no one's gonna spend the 
money to update their C89/99/whatever code to C23/26. Even if they 
did, there's no guarantee others would as well.


You can incrementally fix code, as I do with the dmd source code 
(originally in C) regularly.


Yes; _source code_. This is the crux of the matter. Can't incrementally 
fix source code that you don't have access to.


Re: DIP 1043---Shortened Method Syntax---Accepted

2022-09-21 Thread Timon Gehr via Digitalmars-d-announce

On 21.09.22 12:39, Mike Parker wrote:

DIP 1043, "Shortened Method Syntax", has been accepted.

The fact that the feature was already implemented behind a preview 
switch carried weight with Atila. He noted that, if not for that, he 
wasn't sure where he would stand on adding the feature, but he could see 
no reason to reject it now.


Walter accepted with a suggested (not a required) enhancement:

It could be even shorter. For functions with no arguments, the () 
could be

omitted, because the => token will still make it unambiguous.


For example:

    T front() => from;

becomes:

    T front => from;


As DIP author, Max decided against this. He said it's not a bad idea, 
but it's then "inconsistent with the other syntaxes". If there is 
a demand for this, it would be easy to add later, but he felt it's 
better to keep things simple for now by going with the current 
implementation as is.




Great news! :)


Re: Giving up

2022-08-06 Thread Timon Gehr via Digitalmars-d-announce

On 8/6/22 19:27, Rumbu wrote:

On Saturday, 6 August 2022 at 08:29:19 UTC, Walter Bright wrote:

On 8/5/2022 9:43 AM, Max Samukha wrote:
Both "123." and "123.E123" are valid C. For some reason, D only copied 
the former.


It's to support UFCS (Universal Function Call Syntax). The idea with C 
compatible aspects of D is to not *silently* break code when there's a 
different meaning for it. And so, these generate an error message in D 
(although the error message could be much better).


So, does it work with ImportC?

test2.c:
  float z = 85886696878585969769557975866955695.E0;
  long double x = 0x1p-16383;

dmd -c test2.c
  test2.c(3): Error: number `0x1p-16383` is not representable



It is. Since the real exponent is biased by 16383 (15 bits), this is 
equivalent to all exponent bits being set to 0. It may look unimportant, 
but this was about a floating-point library. Subnormal values are 
part of the floating-point standard.


Seems you should just use a long double/real literal?

real x = 0x1p-16383L; // (works)


Re: Is this a violation of const?

2022-07-30 Thread Timon Gehr via Digitalmars-d-learn

On 7/30/22 15:19, Salih Dincer wrote:

On Saturday, 30 July 2022 at 10:02:50 UTC, Timon Gehr wrote:


It's a `const` hole, plain and simple.


This code, which consists of 26 lines, does not compile in DMD 2.087.  I 
am getting this error:


constHole.d(15): Error: mutable method `source.Updater.opCall` is not 
callable using a `const` object
constHole.d(15):    Consider adding `const` or `inout` to 
source.Updater.opCall
constHole.d(21): Error: function `source.Updater.opCall(string s)` is 
not callable using argument types `(string*)`
constHole.d(21):    cannot pass argument `` of type `string*` to 
parameter `string s`


SDB@79


Exactly. This is my point. This code does not compile, and neither 
should the original version, because it's doing basically the same thing.


Re: Is this a violation of const?

2022-07-30 Thread Timon Gehr via Digitalmars-d-learn

On 7/30/22 00:16, H. S. Teoh wrote:

On Fri, Jul 29, 2022 at 09:56:20PM +, Andrey Zherikov via 
Digitalmars-d-learn wrote:

In the example below `func` changes its `const*` argument. Does this
violates D's constness?

```d
import std;

struct S
{
 string s;

 void delegate(string s) update;
}

void func(const S* s)
{
 writeln(*s);
 s.update("func");
 writeln(*s);
}

void main()
{
 auto s = S("test");
 s.update = (_) { s.s = _; };

 writeln(s);
 func(&s);
 writeln(s);
}
```

The output is:
```
S("test", void delegate(string))
const(S)("test", void delegate(string))
const(S)("func", void delegate(string))
S("func", void delegate(string))
```


At first I thought this was a bug in the const system,


It very much is. https://issues.dlang.org/show_bug.cgi?id=9149
(Note that the fix proposed in the first post is not right, it's the 
call that should be disallowed.)



but upon closer
inspection, this is expected behaviour. The reason is, `const`
guarantees no changes *only on the part of the recipient* of the `const`
reference;


The delegate _is_ the recipient of the delegate call. The code is 
calling a mutable method on a `const` receiver.



it does not guarantee that somebody else doesn't have a
mutable reference to the same data.  For the latter, you want immutable
instead of const.

So in this case, func receives a const reference to S, so it cannot
modify S. However, the delegate created by main() *can* modify the data,


This delegate is not accessible in `func`, only a `const` version.


because it holds a mutable reference to it. So when func invokes the
delegate, the delegate modifies the data thru its mutable reference.
...


`const` is supposed to be transitive, you can't have a `const` delegate 
that modifies data through 'its mutable reference'.



Had func been declared with an immutable parameter, it would have been a
different story, because you cannot pass a mutable argument to an
immutable parameter, so compilation would fail. Either s was declared
mutable and the delegate can modify it, but you wouldn't be able to pass
it to func(), or s was declared immutable and you can pass it to func(),
but the delegate creation would fail because it cannot modify immutable.

In a nutshell, `const` means "I cannot modify the data", whereas
`immutable` means "nobody can modify the data". Apparently small
difference, but actually very important.


T



This is a case of "I am modifying the data anyway, even though am 
`const`." Delegate contexts are not exempt from type checking. A `const` 
existential type is still `const`.


What the code is doing is basically the same as this:

```d
import std;

struct Updater{
    string* context;
    void opCall(string s){ *context=s; }
}

struct S{
    string s;
    Updater update;
}

void func(const S* s){
    writeln(*s);
    s.update("func");
    writeln(*s);
}

void main(){
    auto s = S("test");
    s.update = Updater(&s.s);

    writeln(s);
    func(&s);
    writeln(s);
}
```

It's a `const` hole, plain and simple.


Re: Lambdas Scope

2022-04-11 Thread Timon Gehr via Digitalmars-d-learn

On 11.04.22 11:11, Salih Dincer wrote:
How is this possible? Why does this compile? Don't identical names in the 
same scope conflict?


```d
int function(int) square;
void main()
{
   square = (int a) => a * a;
   int square = 5.square;
   assert(square == 25);
}
```

Thanks, SDB@79


- Local variables in a function can hide variables in less nested 
scopes. (Such hiding is only an error for nested block scopes within the 
same function.)


- UFCS always looks up names in module scope, it does not even check locals.
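Both points at once, in a minimal sketch (using a hypothetical `twice` instead of the thread's `square`):

```d
int twice(int x) { return 2 * x; }

void main()
{
    // the local hides the module-level `twice`; such hiding is only an
    // error for nested block scopes within the same function
    int twice = 5.twice; // UFCS ignores locals, finds module-level `twice`
    assert(twice == 10);
}
```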


Re: Any workaround for "closures are not yet supported in CTFE"?

2021-12-08 Thread Timon Gehr via Digitalmars-d-learn

On 12/8/21 9:07 AM, Petar Kirov [ZombineDev] wrote:

On Wednesday, 8 December 2021 at 07:55:55 UTC, Timon Gehr wrote:

On 08.12.21 03:05, Andrey Zherikov wrote:

On Tuesday, 7 December 2021 at 18:50:04 UTC, Ali Çehreli wrote:
I don't know whether the workaround works with your program but that 
delegate is the equivalent of the following struct (the struct 
should be faster because there is no dynamic context allocation). 
Note the type of 'dg' is changed accordingly:


The problem with the struct-based solution is that I will likely be stuck 
with only one implementation of the delegate (i.e. one opCall 
implementation). Or I'll have to implement dispatching inside opCall 
based on some "enum" myself, which seems weird to me. Am I missing 
anything?


This seems to work, maybe it is closer to what you are looking for.

...


Incidentally, yesterday I played with a very similar solution. Here's my 
version:


https://run.dlang.io/gist/PetarKirov/f347e59552dd87c4c02d0ce87d0e9cdc?compiler=dmd 




```d
interface ICallable
{
    void opCall() const;
}

auto makeDelegate(alias fun, Args...)(auto ref Args args)
{
    return new class(args) ICallable
    {
        Args m_args;
        this(Args p_args) { m_args = p_args; }
        void opCall() const { fun(m_args); }
    };
}

alias Action = void delegate();

Action createDelegate(string s)
{
    import std.stdio;
    return &makeDelegate!((string str) => writeln(str))(s).opCall;
}

struct A
{
    Action[] dg;
}

A create()
{
    A a;
    a.dg ~= createDelegate("hello");
    a.dg ~= createDelegate("buy");
    return a;
}

void main()
{
    enum a = create();
    foreach(dg; a.dg)
        dg();
}
```


Nice, so the error message is lying. This is a bit more complete:

```d
import std.stdio, std.traits, core.lifetime;

auto partiallyApply(alias fun,C...)(C context){
    return &new class(move(context)){
        C context;
        this(C context){ foreach(i,ref c;this.context) c=move(context[i]); }
        auto opCall(ParameterTypeTuple!fun[context.length..$] args){
            return fun(context,forward!args);
        }
    }.opCall;
}

alias Action = void delegate();

Action createDelegate(string s){
    import std.stdio;
    return partiallyApply!((string str) => writeln(str))(s);
}

struct A{ Action[] dg; }

A create(){
    A a;
    a.dg ~= createDelegate("hello");
    a.dg ~= createDelegate("buy");
    return a;
}

void main(){
    enum a = create();
    foreach(dg; a.dg)
        dg();
}
```


Re: Any workaround for "closures are not yet supported in CTFE"?

2021-12-08 Thread Timon Gehr via Digitalmars-d-learn

On 08.12.21 03:05, Andrey Zherikov wrote:

On Tuesday, 7 December 2021 at 18:50:04 UTC, Ali Çehreli wrote:
I don't know whether the workaround works with your program but that 
delegate is the equivalent of the following struct (the struct should 
be faster because there is no dynamic context allocation). Note the 
type of 'dg' is changed accordingly:


The problem with the struct-based solution is that I will likely be stuck 
with only one implementation of the delegate (i.e. one opCall implementation). 
Or I'll have to implement dispatching inside opCall based on some "enum" 
myself, which seems weird to me. Am I missing anything?


This seems to work, maybe it is closer to what you are looking for.

```d
import std.stdio, std.traits, core.lifetime;

struct CtDelegate(R,T...){
    void* ctx;
    R function(T,void*) fp;
    R delegate(T) get(){
        R delegate(T) dg;
        dg.ptr=ctx;
        dg.funcptr=cast(typeof(dg.funcptr))fp;
        return dg;
    }
    alias get this;
    this(void* ctx,R function(T,void*) fp){ this.ctx=ctx; this.fp=fp; }
    R opCall(T args){ return fp(args,ctx); }
}

auto makeCtDelegate(alias f,C)(C ctx){
    static struct Ctx{ C ctx; }
    return CtDelegate!(ReturnType!(typeof(f)),ParameterTypeTuple!f[0..$-1])(
        new Ctx(forward!ctx),
        (ParameterTypeTuple!f[0..$-1] args,void* ctx){
            auto r=cast(Ctx*)ctx;
            return f(r.ctx,forward!args);
        });
}

struct A{
    CtDelegate!void[] dg;
}

auto createDelegate(string s){
    return makeCtDelegate!((string s){ s.writeln; })(s);
}

A create(){
    A a;
    a.dg ~= createDelegate("hello");
    a.dg ~= createDelegate("buy");
    return a;
}

void main(){
    static a = create();
    foreach(dg; a.dg)
        dg();
}
```


Re: Beta 2.098.0

2021-10-19 Thread Timon Gehr via Digitalmars-d-announce

On 11.10.21 03:08, Paul Backus wrote:

On Monday, 11 October 2021 at 00:34:28 UTC, Mike Parker wrote:

On Sunday, 10 October 2021 at 23:36:56 UTC, surlymoor wrote:


Meanwhile @live is in the language, and it's half-baked. Then there are 
preview switches that will linger on into perpetuity, and DIP 
implementations that haven't been finished.


And Walter has prioritized this issue over those. No matter what he 
works on, people will moan that he isn’t working on something else. 
Maybe we should clone him.


Perhaps worth asking why Walter, specifically, is required to work on 
@live in order for it to make progress. Is it just because no one else 
is willing to step up to the plate, or is he the only person 
qualified/capable enough?


I think @live is a dead end and any further work on it is probably 
wasted unless the code is reusable for some other feature. Ownership is 
a property of values, not of functions operating on those values. In 
particular, prioritizing ImportC over @live is the right call. ImportC 
is high-impact and Walter has a lot of relevant expertise.


Re: Please Congratulate My New Assistant

2021-01-25 Thread Timon Gehr via Digitalmars-d-announce

On 25.01.21 21:03, Paul Backus wrote:

On Monday, 25 January 2021 at 12:48:48 UTC, Imperatorn wrote:

But, at the same time, I guess it could be a bit demoralizing you know?


That's true.


I beg to differ. Open issues are not demoralizing.

Sometimes, reality is demoralizing. That doesn't mean we 
should hide our heads in the sand and ignore it.


What's demoralizing about this exchange is that it seems to imply there 
are people around who have nothing better to do than waiting for you to 
die so they can close your issues in the issue tracker. Apparently they 
will even feel like they are doing a good thing as they destroy your 
legacy. :(


Re: Please Congratulate My New Assistant

2021-01-25 Thread Timon Gehr via Digitalmars-d-announce

On 25.01.21 13:48, Imperatorn wrote:

On Monday, 25 January 2021 at 10:39:14 UTC, Walter Bright wrote:

On 1/24/2021 10:46 PM, Imperatorn wrote:
Imo it's reasonable to close or archive issues that are older than 10 
years.


We are not going to do that just because they are old.

If a bug still exists in the current DMD, the bug report stays open.


I can understand why, I really do.
...


Great, case closed.


Re: Please Congratulate My New Assistant

2021-01-25 Thread Timon Gehr via Digitalmars-d-announce

On 25.01.21 11:05, Imperatorn wrote:

On Monday, 25 January 2021 at 06:59:00 UTC, ag0aep6g wrote:

On 25.01.21 07:46, Imperatorn wrote:

Proposed solution:
Archive issues older than 10 years (and maybe some criteria based on 
latest update). If they are relevant, it's the author's 
responsibility to update the issue so that it's reproducible in the 
latest release.

#reasonablebutcontroversial


Just no. Reproducibility is a criterion. Age isn't.


Sure, but how do you define it?


If you need reproducibility to be defined, please stay away from the 
issue tracker.


Re: Please Congratulate My New Assistant

2021-01-24 Thread Timon Gehr via Digitalmars-d-announce

On 24.01.21 14:00, Max Haughton wrote:

On Sunday, 24 January 2021 at 12:36:16 UTC, Timon Gehr wrote:

On 18.01.21 10:21, Mike Parker wrote:
Thanks once more to Symmetry Investments, we have a new paid staffer 
in the D Language Foundation family.


Though I call him my "assistant", I can already see he will be more 
than that. He'll be taking some things off my shoulders, sure, but he 
also has ideas of his own to bring into the mix. Adding him to the 
team is certain to be a boon for the D community.


So, a word of warning to those of you who haven't heard from me in a 
while pestering you for blog posts: get used to the name "Max Haughton".


And congratulate him while you're at it!


Congratulations. However, Max seems to be just closing all enhancement 
requests on bugzilla as invalid. This is the behavior of a vandal. 
Please stop. Any policy that requires this is ill-advised. Issues are 
valuable and binning them like this is disrespectful to the time of 
the reporters.


I was going through trying to close things that are either not bugs 
anymore because they haven't been touched since 2010 and have been 
fixed by entropy,


I can get behind this. You closed one of my issues that was fixed this 
way, but I don't usually report INVALID issues, this is why there is a 
WORKSFORME category.


or language changes which will never be looked at 
again because they aren't DIPs.


Of course they won't be looked at again if you claim they are invalid 
just by virtue of being enhancement requests. Obviously you looked at 
them now, so your reasoning here makes no sense. This is why there is an 
enhancement request category in the first place. They are not invalid 
issues, they are enhancement request issues.


They're still in public record but 
fundamentally they're just not useful anymore


Issues are not useful anymore when they are fixed or there is a good 
reason why they should not be fixed.


- I was literally just 
going through bugs FILO and trying to either reproduce or at least 
characterise whether they even can be acted on by the foundation.

...
Why does it seem like people who are hired to help improve D instead 
always start closing bugzilla issues without actually fixing them? This 
is meaningless optimization of indicators that don't even mean what you 
seem to think they mean. It's a waste of time and resources.


It's entirely possible I was overzealous and if I was, obviously reopen 
them


I don't have time for that, I don't get notified for all of them, just 
the ones I reported or interacted with. I have no idea what other 
potentially valuable enhancement requests you closed with a 
condescending "INVALID" verdict just because they were enhancement requests.


Please reopen all enhancement requests that you closed even though they 
remain unfixed.


but ultimately the enhancements have to go through a DIP because 
it's not 2012 anymore.

...


That does not change what those enhancement requests are for, it just 
makes it a bit harder to fix them. Obviously, nowadays the proper way to 
get rid of enhancement requests is by pushing them through the DIP 
process (or perhaps just making a good point to the reporter why it 
would be a bad idea to implement them), but of course, that requires 
more work than a couple of clicks and button presses. Closing as invalid 
because it is an enhancement request is not a valid way to get rid of 
enhancement requests.


If you really want to enact a policy that new enhancement requests 
should be illegal, I guess DLF can do that even though it is obviously a 
stupid idea (a DIP is a lot more formal, a large bar to overcome, so you 
will lose a lot of ideas), but how about you at least don't close issues 
that were made at a time when this was the officially encouraged way to 
track ideas? IMNSHO it should stay this way, there is no reason to 
dislike enhancement requests. They don't have the same purpose as DIPs 
(and DIPs are sometimes even necessary to fix issues that are not 
enhancement requests, for example type system unsoundness).


I also updated Stephen S's shared-delegate race condition bug to have a 
test case that actually compiles, and that's from 2010 - threadsan 
catches it now although it doesn't work with @safe either so I'm not 
sure whether we should be embarrassed or not.




There is certainly useful work to be done in the issue tracker. I am 
here objecting to certain systematic destructive practices that do not 
even have any upside. I wish this kind of behavior would stop forever. 
You are not the first person to engage into careless issue closing 
sprees. I think the underlying issue is a bad understanding of the value 
of issues in the issue tracker and some sort of irrational assignment of 
cost to open issues. Walter always says: Put this in bugzilla, it will 
get lost on the forums, and he is right.


Re: Please Congratulate My New Assistant

2021-01-24 Thread Timon Gehr via Digitalmars-d-announce

On 18.01.21 10:21, Mike Parker wrote:
Thanks once more to Symmetry Investments, we have a new paid staffer in 
the D Language Foundation family.


Though I call him my "assistant", I can already see he will be more than 
that. He'll be taking some things off my shoulders, sure, but he also 
has ideas of his own to bring into the mix. Adding him to the team is 
certain to be a boon for the D community.


So, a word of warning to those of you who haven't heard from me in a 
while pestering you for blog posts: get used to the name "Max Haughton".


And congratulate him while you're at it!


Congratulations. However, Max seems to be just closing all enhancement 
requests on bugzilla as invalid. This is the behavior of a vandal. 
Please stop. Any policy that requires this is ill-advised. Issues are 
valuable and binning them like this is disrespectful to the time of the 
reporters.


Re: Printing shortest decimal form of floating point number with Mir

2021-01-06 Thread Timon Gehr via Digitalmars-d-announce

On 06.01.21 07:50, Walter Bright wrote:


> I want to execute the code that I wrote, not what you think I should 
> have instead written, because sometimes you will be wrong.

With programming languages, it does not matter what you think you wrote. 
What matters is how the language semantics are defined to work.
The language semantics right now are defined to not work, so people are 
going to rely on the common sense and/or additional promises of specific 
backend authors. People are going to prefer that route to the 
alternative of forking every dependency and adding explicit rounding to 
every single floating-point operation. (Which most likely does not even 
solve the problem as you'd still get double-rounding issues.)


In writing professional numerical code, one must carefully understand it, 
knowing that it does *not* work like 7th grade algebra.


That's why it's important to have precise control. Besides, a lot of 
contemporary applications of floating-point computations are not your 
traditional numerically stable fixed-point iterations. Reproducibility 
even of explicitly chaotic behavior is sometimes a big deal, for example 
for artificial intelligence research or multiplayer games.
Also, maybe you don't want your code to change behavior randomly between 
compiler updates. Some applications need to have a certain amount of 
backwards compatibility.



Different languages can and do behave differently, too.
Or different implementations. I'm not going to switch languages due to 
an issue that's fixed by not using DMD.


Re: Printing shortest decimal form of floating point number with Mir

2021-01-05 Thread Timon Gehr via Digitalmars-d-announce

On 06.01.21 03:27, Walter Bright wrote:

On 1/5/2021 5:30 AM, Guillaume Piolat wrote:
It would be nice if no excess precision was ever used. It can 
sometimes give a false sense of correctness. It has no upside except 
accidental correctness that can break when compiled for a different 
platform.


That same argument could be used to always use float instead of double. I 
hope you see it's fallacious 

...


Evidence that supports some proposition may well fail to support a 
completely different proposition.


An analogy for your exchange:

G: Birds can fly because they have wings.
W: That same argument could be used to show mice can fly. I hope you see 
it's fallacious 



Anyway, I wouldn't necessarily say occasional accidental correctness is 
the only upside, you also get better performance and simpler code 
generation on the deprecated x87. I don't see any further upsides 
though, and for me, it's a terrible trade-off, because possibility of 
incorrectness and lack of portability are among the downsides.


I want to execute the code that I wrote, not what you think I should 
have instead written, because sometimes you will be wrong. There are 
algorithms in Phobos that can break when certain operations are computed 
at a higher precision than specified. Higher does not mean better; not 
all adjectives specify locations on some good/bad axis.


Re: Printing shortest decimal form of floating point number with Mir

2020-12-23 Thread Timon Gehr via Digitalmars-d-announce

On 23.12.20 16:37, Ola Fosheim Grøstad wrote:

On Wednesday, 23 December 2020 at 03:06:51 UTC, 9il wrote:
You, Andrey, and Atila don't care about language features that have 
been requested for Mir or even more: rejecting DIP draft + DMD partial 
implementation for no real reason.


Out of curiosity, which language features would improve Mir?


https://github.com/dlang/DIPs/blob/master/DIPs/other/DIP1023.md

https://forum.dlang.org/post/kvcrsoqozrflxibgx...@forum.dlang.org

https://forum.dlang.org/thread/gungkvmtrkzcahhij...@forum.dlang.org?page=1

https://forum.dlang.org/post/jwtygeybvfgbosxsb...@forum.dlang.org


Re: static foreach over constant range in @nogc block

2020-10-03 Thread Timon Gehr via Digitalmars-d-learn

On 03.10.20 13:18, tspike wrote:
I came across an issue recently that I’m a little confused by. The 
following program fails to compile under LDC and DMD, though it compiles 
fine under GDC:


     @nogc:

     void main()
     {
     static foreach(i; 0 .. 4)
     {
     pragma(msg, i);
     }
     }

Both DMD and LDC report the following error if I try to compile it:

     test.d(7): Error: cannot use operator ~= in @nogc delegate test.main.__lambda1


I was just wondering, is this is a compiler bug or is there a reason I'm 
overlooking preventing this sort of code from compiling?


It's a compiler bug, the same as this one:

```d
@nogc:
void main(){
    static immutable x = { int[] a; a~=1; return a; }();
}
```


Re: This Right In: PLDI 2020 will take place online and registration is FREE. Closes on Jun 5, so hurry!

2020-06-16 Thread Timon Gehr via Digitalmars-d-announce

On 16.06.20 17:35, Robert M. Münch wrote:

On 2020-06-15 13:01:02 +, Timon Gehr said:


The talk will be on YouTube.


Great.


Papers:
https://www.sri.inf.ethz.ch/publications/bichsel2020silq
https://www.sri.inf.ethz.ch/publications/gehr2020lpsi

Source code:
https://github.com/eth-sri/silq
https://github.com/eth-sri/psi/tree/new-types


Thanks, somehow missed these.
...


I think they were not online when you asked (neither were the versions 
in ACM DL).


What's the main difference of your approach WRT something like this: 
http://pyro.ai/

...


Pyro is a Python library/EDSL, while PSI is a typed programming language 
(with some support for dependent typing).


Pyro's focus is on scalable machine learning. PSI alone would not be 
particularly helpful there.


Pyro fits a parameterized probabilistic model to data using maximum 
likelihood estimation while at the same time inferring a posterior 
distribution for the latent variables of the model. If you use a 
probabilistic model without parameters, Pyro can be used for plain 
probabilistic inference without maximum likelihood estimation.


PSI currently does not do optimization, just probabilistic inference. 
(PSI can do symbolic inference with parameters, then they can be 
optimized with some other tool.)


The goal is to find a distribution such that KL-divergence of the 
posterior and this distribution is as small as possible. PSI always 
finds the true posterior when it is successful (i.e. KL-divergence 0 
when applicable), but will not always succeed, in particular, it might 
not be fast enough, or the result may not be in a useful form.


Pyro produces best-effort results. You may have to use some sort of 
validation to make sure that results are useful.


- The posterior distribution is assumed to have a specific form that can 
be represented symbolically and is normalized by construction. Often, 
the true posterior is not actually (known to be) in that family.


- The KL-divergence is upper-bounded using ELBO (evidence lower bound).

- The (gradient of the) ELBO is approximated by sampling from the 
assumed posterior with current parameters.


- This approximate ELBO is approximately optimized using gradient descent.

Also see: https://pyro.ai/examples/svi_part_i.html
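The relationship between the ELBO and the KL-divergence sketched above can be made explicit. This is the standard identity (added here for clarity, it is not part of the original post): for observed data x, latent variables z, and approximating distribution q,

```latex
\log p(x)
  = \underbrace{\mathbb{E}_{q(z)}\!\left[\log \frac{p(x,z)}{q(z)}\right]}_{\mathrm{ELBO}(q)}
  \;+\; \mathrm{KL}\!\left(q(z)\,\middle\|\,p(z\mid x)\right).
```

Since log p(x) does not depend on q, maximizing the ELBO is equivalent to minimizing the KL-divergence, and the gap between log p(x) and the ELBO is exactly the KL term.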


BTW: I'm located in Zug... so not far away from you guys.





Re: Interesting work on packing tuple layout

2020-06-15 Thread Timon Gehr via Digitalmars-d-announce

On 15.06.20 16:03, Max Samukha wrote:

On Monday, 15 June 2020 at 13:57:01 UTC, Max Samukha wrote:


void main() {
    Tuple!(byte, int, short) t;
    writeln(t[0]);
}


test.d(57,23): Error: need `this` for `__value_field_2` of type `byte`


It should work. This works:

void main() {
    Tuple!(byte, int, short) t;

    t[0] = 0;
    t[1] = 2;
    t[2] = 3;

    auto a0 = t[0];
    auto a1 = t[1];
}


I cannot reproduce the error. writeln(t[0]) works here: 
https://run.dlang.io/is/kz6lFc




Apparently, it has been fixed in 2.092. Nice!


Re: This Right In: PLDI 2020 will take place online and registration is FREE. Closes on Jun 5, so hurry!

2020-06-15 Thread Timon Gehr via Digitalmars-d-announce

On 15.06.20 09:46, M.M. wrote:

On Sunday, 14 June 2020 at 20:22:41 UTC, Timon Gehr wrote:


For PLDI 2020, I have contributed to the following research papers:

https://pldi20.sigplan.org/details/pldi-2020-papers/47/Silq-A-High-Level-Quantum-Language-with-Safe-Uncomputation-and-Intuitive-Semantics 



https://pldi20.sigplan.org/details/pldi-2020-papers/46/-PSI-Exact-Inference-for-Higher-Order-Probabilistic-Programs 



Congratulations.
...


Thanks!

The only relation to D is that the implementations of the two 
presented programming languages are written in D.


Does that mean that your junior co-author(s) use D as well?



Occasionally.


Re: This Right In: PLDI 2020 will take place online and registration is FREE. Closes on Jun 5, so hurry!

2020-06-15 Thread Timon Gehr via Digitalmars-d-announce

On 15.06.20 08:58, Robert M. Münch wrote:

On 2020-06-14 20:22:41 +, Timon Gehr said:

https://pldi20.sigplan.org/details/pldi-2020-papers/46/-PSI-Exact-Inference-for-Higher-Order-Probabilistic-Programs 



This one sounds pretty interesting. Will there be a recording and a 
published paper be available?




The talk will be on YouTube.

Papers:
https://www.sri.inf.ethz.ch/publications/bichsel2020silq
https://www.sri.inf.ethz.ch/publications/gehr2020lpsi

Source code:
https://github.com/eth-sri/silq
https://github.com/eth-sri/psi/tree/new-types


Re: This Right In: PLDI 2020 will take place online and registration is FREE. Closes on Jun 5, so hurry!

2020-06-14 Thread Timon Gehr via Digitalmars-d-announce

On 04.06.20 14:46, Andrei Alexandrescu wrote:
PLDI (Programming Language Design and Implementation) is a top academic 
conference. This year PLDI will be held online and registration is free. 
This is an amazing treat.


https://conf.researchr.org/home/pldi-2020

Workshops and tutorials (also free) are of potential interest. These 
caught my eye:


https://pldi20.sigplan.org/home/SOAP-2020 (on the 15th)
https://conf.researchr.org/track/ismm-2020/ismm-2020 (on the 16th)


For PLDI 2020, I have contributed to the following research papers:

https://pldi20.sigplan.org/details/pldi-2020-papers/47/Silq-A-High-Level-Quantum-Language-with-Safe-Uncomputation-and-Intuitive-Semantics

https://pldi20.sigplan.org/details/pldi-2020-papers/46/-PSI-Exact-Inference-for-Higher-Order-Probabilistic-Programs

The only relation to D is that the implementations of the two presented 
programming languages are written in D.


Re: Interesting work on packing tuple layout

2020-06-14 Thread Timon Gehr via Digitalmars-d-announce

On 14.06.20 20:25, Paul Backus wrote:

On Sunday, 14 June 2020 at 16:26:17 UTC, Avrina wrote:


The situation also applies to the only tuple implementation in D. If 
you are proposing a new type with emphasis on reducing the footprint 
of the tuple then I don't see a problem with that. Changing the 
existing tuple implementation would be problematic.


Presumably any such change would be made backwards-compatible. So 
Tuple.opIndex and Tuple.expand would still return elements in the order 
specified by the user, even if that order is different from the internal 
storage order.


Indeed, that's why I noted that the obvious way to achieve that does not 
work. Although some assumptions will break, for example, there might be 
code that assumes that tupleof does the same thing as expand.


I was thinking about e.g., manual cache optimization, but reducing size 
in the common case where such considerations are not made may well be 
more important. If it can be done at all; I am not currently aware of a 
workaround.


Re: Interesting work on packing tuple layout

2020-06-13 Thread Timon Gehr via Digitalmars-d-announce

On 13.06.20 21:11, Andrei Alexandrescu wrote:

https://github.com/ZigaSajovic/optimizing-the-memory-layout-of-std-tuple

Would be interesting to adapt it for std.tuple.



That's likely to run into the following arbitrary language limitation:

---
alias Seq(T...)=T;
struct T{
int a,b;
alias expand=Seq!(b,a);
}
void main(){
import std.stdio;
writeln(T(1,2).expand);
}
---
Error: need `this` for `b` of type `int`
Error: need `this` for `a` of type `int`
---

Another question is if automatic packing is worth making the layout 
harder to predict.
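The layout-packing idea behind the linked work can be illustrated independently of D: laying fields out in declaration order can waste padding that sorting by decreasing alignment avoids. A minimal sketch in Python; the sizes and alignments below are hypothetical examples, not measurements of std.tuple:

```python
def layout_size(fields):
    # fields: list of (size, alignment); lay them out sequentially,
    # inserting padding so each field starts at a multiple of its alignment
    offset = 0
    max_align = 1
    for size, align in fields:
        offset = (offset + align - 1) // align * align  # round up to alignment
        offset += size
        max_align = max(max_align, align)
    # the total size is padded to a multiple of the largest alignment
    return (offset + max_align - 1) // max_align * max_align

declared = [(1, 1), (4, 4), (2, 2)]   # e.g. (byte, int, short), declaration order
packed = sorted(declared, key=lambda f: f[1], reverse=True)
print(layout_size(declared), layout_size(packed))  # → 12 8
```

For these `(byte, int, short)`-like fields, declaration order needs 12 bytes (5 bytes of padding), while the alignment-sorted order needs 8 (1 byte of padding).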


Re: DIP 1028 "Make @safe the Default" is dead

2020-05-29 Thread Timon Gehr via Digitalmars-d-announce

On 29.05.20 06:53, Walter Bright wrote:
The subject says it all. 


Thanks! For the record, this would have been my preference:

fix @safe, @safe by default >
fix @safe, @system by default >
don't fix @safe, @system by default >
don't fix @safe, @safe by default

While this retraction improves matters in the short term, I think there 
is still potential for improvement. In particular, `@safe` is still 
broken for function prototypes.



I recommend adding `@safe:` as the first line in all your project modules


It would be great if `@safe:` did not affect declarations that would 
otherwise infer annotations.


Re: Rationale for accepting DIP 1028 as is

2020-05-28 Thread Timon Gehr via Digitalmars-d-announce

On 28.05.20 10:50, Daniel Kozak wrote:

He seems to think
that weakening @safe is worth doing, because it will ultimately mean that
more code will be treated as @safe and mechanically checked by the compiler,

And I believe he is right.



No, it's a false dichotomy. Weakening @safe to allow more code to be 
@safe might have been sensible if there was no @trusted annotation. 
However, as things stand, @trusted is sufficient as a tool to introduce 
potentially wrong assumptions about memory safety, we don't need more, 
especially not implicit ones.


The reason why people are not using @safe is partly that it is not the 
default, but it is mostly that their library dependencies _including 
Phobos_ are not properly annotated. This needs actual work to fix.


If there is significant perceived value in performing @safety checks in 
@system code, we can add a new function attribute that causes 
non-transitive @safe checks but otherwise gets treated as @system. @safe 
does not have to take this role.


Re: Rationale for accepting DIP 1028 as is

2020-05-27 Thread Timon Gehr via Digitalmars-d-announce

On 27.05.20 12:51, Walter Bright wrote:

On 5/27/2020 3:01 AM, Timon Gehr wrote:
This is clearly not possible, exactly because the old @safe rules are 
stronger. 


Thank you. We can agree on something.
...


I am not sure if you noticed that I agree with most of your points, just 
not about their relevance to the topic at hand.



But why exactly should API breakage not count?


I've addressed exactly this a dozen times or more, to you


No. You did not. I went over all of your responses to my posts again to 
make sure. Why are you making this claim?


I haven't made API breakage a central point to any of my previous posts 
and you did not address any of my criticism in any depth. As far as I 
can tell, the only point you engaged with was that @trusted is not 
greenwashing.


and others. 


I don't think you did, but I am not going to check.


Repeating myself has become pointless.

It's fine to disagree with me. Argue that point. But don't say I didn't 
address it.


As far as I remember, you did not address this specific point, but if I 
had to extrapolate your response from previous points you made I would 
expect your opinion to be that implicitly broken APIs should be fixed by 
universal manual review of not explicitly annotated functions and that 
this is better than the compiler catching it for you because this way 
only people who are competent to judge which annotation should be there 
will notice that it is missing.


Re: Rationale for accepting DIP 1028 as is

2020-05-27 Thread Timon Gehr via Digitalmars-d-announce

On 27.05.20 07:54, Walter Bright wrote:

On 5/26/2020 1:32 PM, Paul Backus wrote:
The reason extern function declarations are particularly problematic 
is that changing them from @system-by-default to @safe-by-default can 
cause *silent* breakage in existing, correct code.


Can you post an example of currently compiling and correctly working 
code that will break?


Setting aside use of __traits(compiles, ...).


What exactly is your standard here? Are you saying we have to produce 
the following?


- Monolithic example, API breakage does not count.

- No __traits(compiles, ...)

- The code has to compile under both old and new rules.

- The code has to corrupt memory in @safe code under new rules.

This is clearly not possible, exactly because the old @safe rules are 
stronger. But why exactly should API breakage not count?


Re: Rationale for accepting DIP 1028 as is

2020-05-27 Thread Timon Gehr via Digitalmars-d-announce

On 27.05.20 11:34, Bastiaan Veelo wrote:

On Wednesday, 27 May 2020 at 09:09:58 UTC, Walter Bright wrote:

On 5/26/2020 11:20 PM, Bruce Carneal wrote:

I'm not at all concerned with legacy non-compiling code of this nature.


Apparently you agree it is not an actual problem.


Really? I don't know if you really missed the point being made, or 
you're being provocative. Both seem unlikely to me.


-- Bastiaan.


It's just selective reading and confirmation bias.

Walter did not read past the quoted sentence as it successfully 
slaughters the straw man he set up in his previous post.


Re: DIP1028 - Rationale for accepting as is

2020-05-26 Thread Timon Gehr via Digitalmars-d-announce

On 26.05.20 13:09, Atila Neves wrote:

On Monday, 25 May 2020 at 17:01:24 UTC, Panke wrote:

On Monday, 25 May 2020 at 16:29:24 UTC, Atila Neves wrote:
A few years ago I submitted several PRs to Phobos to mark all 
unittests that could with @safe explicitly. I'd say that was a good 
example of nobody reviewing them for their @systemness.


Ideally you should be able to blindly mark every function definition 
with @safe, because the compiler will catch you if you fall. Only if 
you type @trusted you should need to be careful.


Doesn't work for templated functions since their @safety might depend on 
the particular instantiation (consider std.algorithm.map).


I think the point was that annotating with @safe liberally should not 
enable memory corruption.


Re: DIP1028 - Rationale for accepting as is

2020-05-25 Thread Timon Gehr via Digitalmars-d-announce

On 25.05.20 14:22, Johannes T wrote:

On Monday, 25 May 2020 at 11:52:27 UTC, Timon Gehr wrote:

On 25.05.20 11:25, Johannes T wrote:

@trusted can't be trusted


That's the point of @trusted. ._.

The code is trusted by the programmer, not the annotation by the 
compiler.


Sorry, I phrased it poorly. I meant @trusted would be used more 
frequently. The focus would spread and lead to less rigorous checks.


This is just not true. If the compiler forces you to decide to put 
either @trusted or @system, then you will not get more usage of @trusted 
than if it implicitly decides for you that what you want is implicit 
@trusted.


Re: DIP1028 - Rationale for accepting as is

2020-05-25 Thread Timon Gehr via Digitalmars-d-announce

On 25.05.20 11:25, Johannes T wrote:

@trusted can't be trusted


That's the point of @trusted. ._.

The code is trusted by the programmer, not the annotation by the compiler.


Re: DIP1028 - Rationale for accepting as is

2020-05-24 Thread Timon Gehr via Digitalmars-d-announce

On 24.05.20 11:10, Walter Bright wrote:

On 5/23/2020 11:26 PM, Bruce Carneal wrote:
I don't believe that you or any other competent programmer greenwashes 
safety critical code.  Regardless, the safety conscious must review 
their dependencies whatever default applies.


That's the theory. But we do, for various reasons. I've seen it a lot 
over the years, at all levels of programming ability. It particularly 
happens when someone needs to get the code compiling and running, and 
the error message is perceived as a nuisance getting in the way.


We should be very careful about adding nuisances to the language that 
make it easier to greenwash than to do the job correctly.


Implicit greenwashing by the compiler is a nuisance that makes it harder 
to do the job correctly and easier to do the wrong thing.


Re: DIP1028 - Rationale for accepting as is

2020-05-24 Thread Timon Gehr via Digitalmars-d-announce

On 24.05.20 10:55, Walter Bright wrote:
I infer your position is the idea that putting @trusted on the 
declarations isn't greenwashing, while @safe is.

...


It's only greenwashing if it's misleading. Putting @safe is a lie, 
putting @trusted is honest.



I can't see a practical difference between:

@safe extern (C) void whatevs(parameters);
@trusted extern (C) void whatevs(parameters);

Both require that whatevs() provide a safe interface. The difference 
between them is in the implementation of those functions, not the 
interface. Since the D compiler cannot see those implementations, they 
are immaterial to the compiler and user.


Sure, that's the point. Your @safe by default DIP in practice makes 
certain declarations @trusted by default. @safe is a fine default. 
@trusted is a horrible default. That's why your DIP claims it is for 
@safe by default (and not @trusted by default). Except in this one weird 
special case, where it introduces @trusted by default.


Re: DIP1028 - Rationale for accepting as is

2020-05-23 Thread Timon Gehr via Digitalmars-d-announce

On 24.05.20 05:28, Walter Bright wrote:

I'd like to emphasize:



I understand all of those points and most of them are true, and obvious.

The issue is that they are not a justification for the decision. You 
seem to think that greenwashing is not greenwashing when it is done by 
the compiler without user interaction. Why is that?


1. It is not possible for the compiler to check any declarations where 
the implementation is not available. Not in D, not in any language. 
Declaring a declaration safe does not make it safe.

...


Which is exactly why it should not be possible to declare it @safe.

2. If un-annotated declarations cause a compile time error, it is highly 
likely the programmer will resort to "greenwashing" - just slapping 
@safe on it. I've greenwashed code. Atila has. Bruce Eckel has. We've 
all done it. Sometimes even for good reasons.

...


Slapping @safe on it should not even compile. You should slap either 
@system or @trusted on it.



3. Un-annotated declarations are easily detectable in a code review.
...


It's easier to find something that is there than something that is not 
there. Greenwashing is not easier to detect if the compiler did it 
implicitly.



4. Greenwashing is not easily detectable in a code review.
...


Even though it is easy to miss in a code review, it's easy to detect 
automatically. Any extern(C) prototype that is annotated @safe 
(explicitly or implicitly) is greenwashed.



5. Greenwashing doesn't fix anything. The code is not safer.


Actually further down you say that it makes the code safer in a 
"not-at-all obvious way". Which is it?



It's an illusion, not a guarantee.
...


Yes. On the other hand, @trusted is not an illusion, it is a way to 
clarify responsibilities.


6. If someone cares to annotate declarations, it means he has at least 
thought about it, because he doesn't need to.


True, but this is an argument against restrictive defaults in general, 
in particular @safe by default. Also note that if someone cares to 
annotate declarations, the compiler pointing out missing annotations 
that would otherwise cause implicit greenwashing is _useful_.



Hence it's more likely to be correct than when greenwashed.
...


This is true whether or not the compiler does the greenwashing 
implicitly. Annotating with @safe is a lie, whether the compiler does it 
or the programmer. It should be rejected and force @system or @trusted. 
You can still quickly see a difference in applied care by checking 
whether it's a single @trusted: or each prototype is annotated individually.



7. D should *not* make it worthwhile for people to greenwash code.
...


Greenwashing automatically is not a solution, it's admitting defeat. Why 
can't the compiler just reject greenwashing with @safe?
Slapping @trusted on prototypes is not greenwashing, it's saying "I take 
responsibility for the memory safety of this external C code".


It is, in a not-at-all obvious way, safer for C declarations to default 
to being safe.


@safe is advertised to give mechanical guarantees, where @trusted is a 
way for programmers to take responsibility for parts of the code. It is 
not advertised to be an unsound linter with pseudo-pragmatic trade-offs 
and implicit false negatives.


Re: DIP1028 - Rationale for accepting as is

2020-05-22 Thread Timon Gehr via Digitalmars-d-announce

On 22.05.20 18:43, Adam D. Ruppe wrote:

On Friday, 22 May 2020 at 16:39:42 UTC, jmh530 wrote:
Fortunately, the above point can be more easily fixed by making `free` 
@system


With the o/b system `free` might actually work out OK


free(new int);


Re: DIP1028 - Rationale for accepting as is

2020-05-22 Thread Timon Gehr via Digitalmars-d-announce

On 22.05.20 16:49, bachmeier wrote:

On Friday, 22 May 2020 at 14:38:09 UTC, Timon Gehr wrote:

On 22.05.20 15:58, bachmeier wrote:

...

Honest question: What is the use case for an 
absolutely-positively-has-to-be-safe program that calls C code? Why 
would anyone ever do that? C is not and will never be a safe 
language. "Someone looked at that blob of horrendous C code and 
thinks it's safe" does not inspire confidence. Why not rewrite the 
code in D (or Rust or Haskell or whatever) if safety is that critical?


Honesty is what's critical. The annotations should mean what they are 
advertised to mean and making those meanings as simple as possible 
makes them easier to explain. As things stand, @safe can mean that 
someone accidentally or historically did not annotate an extern(C) 
prototype and an unsafe API a few calls up was ultimately exposed with 
the @safe attribute because the compiler never complained.


In my opinion, the only advantage of @safe is that the compiler has 
checked the code and determines that it only does things considered 
safe.


Wrong, but let's roll with that.

I don't see that marking an extern(C) function @trusted buys you 
anything, at least not until you can provide a compiler guarantee for 
arbitrary C code.


It buys you the ability to call that function from @safe code. Clearly 
you can't mark it @safe because the compiler has not checked it.


Re: DIP1028 - Rationale for accepting as is

2020-05-22 Thread Timon Gehr via Digitalmars-d-announce

On 22.05.20 15:58, bachmeier wrote:

...

Honest question: What is the use case for an 
absolutely-positively-has-to-be-safe program that calls C code? Why 
would anyone ever do that? C is not and will never be a safe language. 
"Someone looked at that blob of horrendous C code and thinks it's safe" 
does not inspire confidence. Why not rewrite the code in D (or Rust or 
Haskell or whatever) if safety is that critical?


Honesty is what's critical. The annotations should mean what they are 
advertised to mean and making those meanings as simple as possible makes 
them easier to explain. As things stand, @safe can mean that someone 
accidentally or historically did not annotate an extern(C) prototype and 
an unsafe API a few calls up was ultimately exposed with the @safe 
attribute because the compiler never complained.


How would you feel if you never intended to expose a @safe interface, 
but someone imported your library after determining it contained no 
@trusted annotations (following the advice of articles on @safe), relied 
on the @safe annotation and then had weird sporadic memory corruption 
issues in production that took them months to ultimately trace back to 
your library? Would you feel responsible or would you rather put the 
blame on Walter?


Re: DIP1028 - Rationale for accepting as is

2020-05-22 Thread Timon Gehr via Digitalmars-d-announce

On 22.05.20 03:22, Walter Bright wrote:


This is Obviously A Good Idea. Why would I oppose it?

1. I've been hittin' the crack pipe again.
2. I was secretly convinced, but wanted to save face.
3. I make decisions based on consultation with my astrologer.
4. I am evil.


5. You are backwards-rationalizing a wrong intuition that is based on 
experiences that are not actually analogous. You are ignoring feedback 
given by many people around you because that worked out well for you in 
the past.


I know that you had many interactions with large groups of ignorant 
people who thought that you would never be able to pull off a certain 
thing. This is not one of those cases. I understand the appeal, but the 
backlash really should not encourage you to soldier on this time.




1. Go through 200 functions in clibrary.d and determine which are @safe
and which are @system. This is what we want them to do. We try to motivate
this with compiler error messages. Unfortunately, this is both tedious and
thoroughly impractical, as our poor user Will Not Know which are safe and
which are system. We can correctly annotate core.stdc.stdio because I know
those functions intimately. This is not true for other system C APIs, and
even less true for some third party C library we're trying to interface to.

2. Annotate useClibrary() as @trusted or @system. While easier,


First do 2, then, over time, do 1. If having the @safe tag and no 
@trusted code is important to you, aim to replace the C code with 
something you can automatically verify, by slowly porting it over to D.


this causes all the benefits of @safe by default to be lost. 


Absolutely not. Greenwashing causes the benefits of certification to be 
lost. Honesty does not. The value of @safe code is what it is because 
there is code that can't be @safe.




4. Edit clibrary.d and make the first line:

     @safe:

I submit that, just like with Java, Option 4 is what people will reach for,
nearly every time. I've had some private conversations where people admitted
this was what they'd do. People who knew it was wrong to do that.
...


They should know to put @trusted instead of @safe, and the compiler 
should enforce it. Also, why do those people speak for everyone else? 
They don't speak for me.


If it's @safe by default, and then someone chooses to annotate it with @system
here and there, I'd feel a lot more confident about the accuracy of the code
annotations than if it just had @safe: at the top. At least they tried.
...


If it has @safe:/@trusted: at the top at least you know they were aware 
what they were doing.
Also, what about if it has @trusted: at the top and some @system 
annotations here and there? Did they not try?



What is actually accomplished with this amendment if it was implemented?

1. Adds a funky, special case rule. It's better to have simple, easily
understood rules than ones with special cases offering little improvement.
...


What about the funky special case rule that the compiler is responsible 
for memory safety of @safe code except in this one weird special case?



2. Existing, working code breaks.
...


Making @safe the default is bound to break code. It's bad enough that 
code will break. Avoiding part of that code breakage is no justification 
for breaking @safe.



3. The most likely code fixes are to just make it compile, absolutely
nothing safety-wise is improved. The added annotations will be a fraud.
...


It's vastly better to have some fraudulent annotations in some projects 
than a fraudulent compiler compiling all projects. Do you really want to 
put the responsibility for the memory safety of random C libraries on 
the compiler developers?



D should not encourage "greenwashing" practices like the Java
exception specification engendered.


So your argument is that you don't want D programmers to have to do 
the dirty work of greenwashing. Therefore the compiler will implicitly 
greenwash for them? What about the programmers who actually want to do 
the right thing and don't want the compiler to implicitly greenwash C 
libraries for them?



The compiler cannot vet the accuracy of bodyless C functions, and we'll just
have to live with that. The proposed amendment does not fix that.
...


@trusted is the fix.


And so, I did not incorporate the proposed amendment to the Safe by Default
DIP.


Which (so far) is a harmless mistake with an easy fix.


Re: @property with opCall

2020-03-09 Thread Timon Gehr via Digitalmars-d-learn

On 09.03.20 13:14, Adam D. Ruppe wrote:


Here's a wiki page referencing one of the 2013 discussions 
https://wiki.dlang.org/Property_Discussion_Wrap-up


https://wiki.dlang.org/DIP24


Re: static foreach / How to construct concatenated string?

2020-03-09 Thread Timon Gehr via Digitalmars-d-learn

On 07.03.20 17:41, MoonlightSentinel wrote:

On Saturday, 7 March 2020 at 16:30:59 UTC, Robert M. Münch wrote:

Is this possible at all?


You can use an anonymous lambda to build the string in CTFE:


It turns out that if you do use this standard idiom, you might end up 
getting blamed for an unrelated DMD bug though:

https://github.com/dlang/dmd/pull/9922

:o)


Re: Improving dot product for standard multidimensional D arrays

2020-03-04 Thread Timon Gehr via Digitalmars-d-learn

On 01.03.20 21:58, p.shkadzko wrote:


**
Matrix!T matrixDotProduct(T)(Matrix!T m1, Matrix!T m2)
in
{
     assert(m1.rows == m2.cols);


This asserts that the result is a square matrix. I think you want 
`m1.cols==m2.rows` instead.



}
do
{
     Matrix!T m3 = Matrix!T(m1.rows, m2.cols);

     for (int i; i < m1.rows; ++i)
     {
     for (int j; j < m2.cols; ++j)
     {
     for (int k; k < m2.rows; ++k)
     {
     m3.data[toIdx(m3, i, j)] += m1[i, k] * m2[k, j];
     }
     }
     }
     return m3;
}
**
...
I can see that accessing the appropriate array member in Matrix.data is 
costly due to toIdx operation but, I can hardly explain why it gets so 
much costly. Maybe there is a better way to do it after all?


Changing the order of the second and third loop probably goes a pretty 
long way in terms of cache efficiency:


Matrix!T matrixDotProduct(T)(Matrix!T m1,Matrix!T m2)in{
assert(m1.cols==m2.rows);
}do{
int m=m1.rows,n=m1.cols,p=m2.cols;
Matrix!T m3=Matrix!T(m,p);
foreach(i;0..m) foreach(j;0..n) foreach(k;0..p)
m3.data[i*p+k]+=m1.data[i*n+j]*m2.data[j*p+k];
return m3;
}


(untested.)
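The effect of the loop reordering can be checked with a small script: in the reordered version, the innermost loop walks a row of the result and a row of `m2` contiguously instead of striding down a column. This is an illustration in Python with flat row-major lists, not the poster's D code:

```python
def matmul_ijk(a, b, m, n, p):
    # naive order: the innermost loop strides through b column-wise
    c = [0] * (m * p)
    for i in range(m):
        for j in range(p):
            for k in range(n):
                c[i * p + j] += a[i * n + k] * b[k * p + j]
    return c

def matmul_ikj(a, b, m, n, p):
    # reordered: the innermost loop walks c and b row-contiguously
    c = [0] * (m * p)
    for i in range(m):
        for k in range(n):
            aik = a[i * n + k]
            for j in range(p):
                c[i * p + j] += aik * b[k * p + j]
    return c

a = [1, 2, 3, 4]   # 2x2 matrix, row-major
b = [5, 6, 7, 8]   # 2x2 matrix, row-major
print(matmul_ijk(a, b, 2, 2, 2) == matmul_ikj(a, b, 2, 2, 2))  # → True
```

Both orders compute the same product; only the memory access pattern (and hence cache behavior on large matrices) differs.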


Re: How to invert bool false/true in alias compose?

2019-12-06 Thread Timon Gehr via Digitalmars-d-learn

On 07.12.19 05:00, Marcone wrote:

import std;

alias cmd = compose!(to!bool, wait, spawnShell, to!string);

void main(){
 writeln(cmd("where notepad.exe"));
}


Result:

C:\Windows\System32\notepad.exe
C:\Windows\notepad.exe
false


The result shows "false" because a successful spawnShell command returns 0, 
and 0 converted to bool is false. But I want to invert false to true, so I 
get true or false depending on whether the command succeeds.


alias cmd = compose!(not!(to!bool), wait, spawnShell, to!string);
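Seen structurally, the fix wraps a negation around the final conversion in the composition chain. A hypothetical Python analogue of `compose` (right-to-left application, like `std.functional.compose`), using a plain exit status instead of actually spawning a shell:

```python
from functools import reduce

def compose(*fns):
    # apply the functions right-to-left, like std.functional.compose
    return lambda x: reduce(lambda acc, f: f(acc), reversed(fns), x)

# an exit status of 0 means success, so negate the truthiness
cmd_ok = compose(lambda b: not b, bool)
print(cmd_ok(0), cmd_ok(1))  # → True False
```

In the D snippet, `not!(to!bool)` plays the role of `lambda b: not b` composed with `bool`.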


Re: Simple casting?

2019-11-27 Thread Timon Gehr via Digitalmars-d-learn

On 27.11.19 11:43, ixid wrote:

On Tuesday, 26 November 2019 at 16:33:06 UTC, Timon Gehr wrote:

import std;
void main(){
    int[] x=[1,1,2,3,4,4];
    int[][] y=x.chunkBy!((a,b)=>a==b).map!array.array;
    writeln(y);
}


This stuff is a nightmare for less experienced users like myself. I wish 
there were a single function that would make any data object eager, no 
matter how convoluted its arrays of arrays of arrays.


import std;

auto eager(T)(T r){
static if(isInputRange!T) return r.map!eager.array;
else return r;
}

void main(){
int[] x=[1,1,2,3,4,4];
int[][] y=x.chunkBy!((a,b)=>a==b).eager;
writeln(y);
}
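For comparison, the same recursive materialization can be mimicked in Python, with iterables standing in for D's input ranges (a rough analogue only, not a translation of the range API; strings are excluded so they are treated as scalars):

```python
from collections.abc import Iterable
from itertools import groupby

def eager(r):
    # recursively turn nested iterables into nested lists,
    # leaving scalars (and strings) alone
    if isinstance(r, Iterable) and not isinstance(r, (str, bytes)):
        return [eager(x) for x in r]
    return r

x = [1, 1, 2, 3, 4, 4]
# groupby plays the role of chunkBy with an equality predicate
y = eager(list(g) for _, g in groupby(x))
print(y)  # → [[1, 1], [2], [3], [4, 4]]
```

As in the D version, the recursion bottoms out at non-iterable elements, so arbitrarily nested lazy structures come out as nested lists.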



Re: Simple casting?

2019-11-26 Thread Timon Gehr via Digitalmars-d-learn

On 26.11.19 23:08, Taylor R Hillegeist wrote:

On Tuesday, 26 November 2019 at 16:33:06 UTC, Timon Gehr wrote:

    int[][] y=x.chunkBy!((a,b)=>a==b).map!array.array;



how did you know to do that?



chunkBy with a binary predicate returns a range of ranges. So if I want 
an array of arrays I have to convert both the inner ranges and the outer 
range.


Re: Simple casting?

2019-11-26 Thread Timon Gehr via Digitalmars-d-learn

On 26.11.19 06:05, Taylor R Hillegeist wrote:

I'm attempting to do a segment group.

details:
alias ProbePoint[3]=triple;
triple[] irqSortedSet = UniqueTriples.keys
     .sort!("a[1].irqid < b[1].irqid",SwapStrategy.stable)
     .array;
83:triple[][] irqSortedSets = irqSortedSet.chunkBy!((a,b) => a[1].irqid 
== b[1].irqid);



GetAllTriplesExtractFileIrqSplit.d(83): Error: cannot implicitly convert 
expression `chunkBy(irqSortedSet)` of type `ChunkByImpl!(__lambda4, 
ProbePoint[3][])` to `ProbePoint[3][][]`


I have something that looks like a triple[][] but I can't seem to get 
that type out.
When I add .array, it converts to a Group, which doesn't make sense to me 
because I'm not using a unary comparison. Any thoughts?


import std;
void main(){
int[] x=[1,1,2,3,4,4];
int[][] y=x.chunkBy!((a,b)=>a==b).map!array.array;
writeln(y);
}


Re: why local variables cannot be ref?

2019-11-25 Thread Timon Gehr via Digitalmars-d-learn

On 25.11.19 10:00, Dukc wrote:

On Monday, 25 November 2019 at 03:07:08 UTC, Fanda Vacek wrote:

Is this preferred design pattern?

```
int main()
{
int a = 1;
//ref int b = a; // Error: variable `tst_ref.main.b` only 
parameters or `foreach` declarations can be `ref`

ref int b() { return a; }
b = 2;
assert(a == 2);
return 0;
}
```

Fanda


It's okay, but I'd prefer an alias, because your snippet uses the heap 
needlessly (it puts variable a into heap to make sure there will be no 
stack corruption if you pass a pointer to function b() outside the 
main() function)


This is not true. You can annotate main with @nogc.


Re: Abstract classes vs interfaces, casting from void*

2019-08-10 Thread Timon Gehr via Digitalmars-d-learn

On 10.08.19 16:29, John Colvin wrote:


Ok. What would go wrong (in D) if I just replaced every interface with 
an abstract class?


interface A{}
interface B{}

class C: A,B{ }


Re: Ownership and Borrowing in D

2019-07-23 Thread Timon Gehr via Digitalmars-d-announce

On 23.07.19 10:20, Sebastiaan Koppe wrote:

On Tuesday, 23 July 2019 at 04:02:50 UTC, Timon Gehr wrote:

On 21.07.19 02:17, Walter Bright wrote:

On 7/20/2019 3:39 PM, Sebastiaan Koppe wrote:

Do you mean to keep track of ownership/borrowedness manually?


No, that's what the copyctor/opAssign/dtor semantics do.


This is not true.


I thought as much. Thanks for the confirmation. I am considering moving 
to pointers to benefit from the future semantics. It's just that I don't 
like pointers that much...


I think tying ownership/borrowing semantics to pointers instead of 
structs makes no sense; it's not necessary and it's not sufficient. Your 
use case illustrates why it is not sufficient.


Re: Ownership and Borrowing in D

2019-07-22 Thread Timon Gehr via Digitalmars-d-announce

On 21.07.19 02:17, Walter Bright wrote:

On 7/20/2019 3:39 PM, Sebastiaan Koppe wrote:

Do you mean to keep track of ownership/borrowedness manually?


No, that's what the copyctor/opAssign/dtor semantics do.


This is not true.


Re: accuracy of floating point calculations: d vs cpp

2019-07-22 Thread Timon Gehr via Digitalmars-d-learn

On 22.07.19 14:49, drug wrote:
I have almost identical (I believe it at least) implementation (D and 
C++) of the same algorithm that uses Kalman filtering. These 
implementations though show different results (least significant 
digits). Before I start investigating, I would like to ask if this issue 
(different results of floating-point calculations for D and C++) is well 
known? Maybe I can read something about it on the web? Is D's 
implementation of floating-point types different from C++'s?


Most of all I'm interesting in equal results to ease comparing outputs 
of both implementations between each other. The accuracy itself is 
enough in my case, but this difference is annoying in some cases.


This is probably not your problem, but it may be good to know anyway: D 
allows compilers to perform arbitrary "enhancement" of floating-point 
precision for parts of the computation, including those performed at 
compile time. I think this is stupid, but I haven't been able to 
convince Walter.
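A sketch of what such "enhancement" permits (an illustration, not from the original post; the exact outcome depends on compiler, flags, and target): the same expression may round differently depending on whether intermediates are kept at 64-bit `double` or 80-bit `real` precision, and a CTFE result may differ from the run-time one.

```d
import std.stdio;

double roundTrip(double x)
{
    // In strict 64-bit arithmetic, 1e16 + 1.0 rounds back to 1e16,
    // so the function returns 0. With 80-bit intermediates the sum
    // is exact and the function returns 1.
    double a = x + 1e16;
    return a - 1e16;
}

void main()
{
    enum ct = roundTrip(1.0); // compile-time (CTFE) evaluation
    auto rt = roundTrip(1.0); // run-time evaluation
    // The spec permits ct != rt, which is why two builds (or a D
    // and a C++ build) can disagree in the least significant digits.
    writeln(ct, " ", rt);
}
```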


Re: Ownership and Borrowing in D

2019-07-19 Thread Timon Gehr via Digitalmars-d-announce

On 17.07.19 22:59, Walter Bright wrote:


Any competing system would need to not be 'opt-in' on a type by type 
basis. I.e. the central feature of an @live function is the user will 
not be able to write memory unsafe code within that function.


Those two things are not the least bit at odds with each other. You only 
need to do the additional checks for types that actually need it. (And 
there you need to _always_ do them in @safe code, not just in 
specially-annotated "@live" functions.) A dynamic array of integers that 
is owned by the garbage collector doesn't need any O/B semantics. A 
malloc-backed array that wants to borrow out its contents needs to be 
able to restrict @safe access patterns to ensure that the memory stays 
alive for the duration of the borrow.


Furthermore, making built-in types change meaning based on function 
attributes is just not a good idea, because you will get needless 
friction at the interface between functions with the attribute and 
functions without.


Anyway, it makes no sense to have a variable of type `int*` that owns 
the `int` it points to (especially in safe code), because that type 
doesn't track how to deallocate the memory.


Ownership and borrowing should be supported for user-defined types, not 
raw pointers. Rust doesn't track ownership for built-in pointers. In 
Rust, raw pointer access is unsafe.
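A minimal D sketch of that point (the type and names are invented for illustration): the owning user-defined type, not a bare `int*`, carries the knowledge of how to deallocate, and disabling copies rules out double frees.

```d
import core.stdc.stdlib : free, malloc;

// Ownership lives in the user-defined type: the destructor knows the
// memory came from malloc, which a raw int* cannot express.
struct OwnedInt
{
    private int* p;

    this(int v)
    {
        p = cast(int*) malloc(int.sizeof);
        *p = v;
    }

    @disable this(this); // no implicit copies, hence no double free

    ~this() { free(p); }

    ref int value() return { return *p; } // a scope-limited borrow
}

void main()
{
    auto x = OwnedInt(41);
    ++x.value;
    assert(x.value == 42);
} // x's destructor frees the allocation exactly once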


Re: Strange closure behaviour

2019-06-15 Thread Timon Gehr via Digitalmars-d-learn

On 15.06.19 18:29, Rémy Mouëza wrote:

On Saturday, 15 June 2019 at 01:21:46 UTC, Emmanuelle wrote:

On Saturday, 15 June 2019 at 00:30:43 UTC, Adam D. Ruppe wrote:

On Saturday, 15 June 2019 at 00:24:52 UTC, Emmanuelle wrote:

Is it a compiler bug?


Yup, a very longstanding bug.

You can work around it by wrapping it all in another layer of 
function which you immediately call (which is fairly common in 
javascript):


    funcs ~= ((x) => (int i) { nums[x] ~= i; })(x);

Or maybe less confusingly written long form:

    funcs ~= (delegate(x) {
    return (int i) { nums[x] ~= i; };
    })(x);

You write a function that returns your actual function, and 
immediately calls it with the loop variable, which will explicitly 
make a copy of it.


Oh, I see. Unfortunate that it's a longstanding compiler bug, but at 
least the rather awkward workaround will do. Thank you!


I don't know if we can tell this is a compiler bug.


It's a bug. It's memory corruption. Different objects with overlapping 
lifetimes use the same memory location.



The same behavior happens in Python.


No, it's not the same. Python has no sensible notion of variable scope.

>>> for i in range(3): pass
...
>>> print(i)
2

Yuck.

The logic being variable `x` is captured by the 
closure. That closure's context will contain a pointer/reference to x. 
Whenever x is updated outside of the closure, the context still points 
to the modified x. Hence the seemingly strange behavior.

...


It's not the same instance of the variable. Foreach loop variables are 
local to the loop body. They may both be called `x`, but they are not 
the same. It's most obvious with `immutable` variables.
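A sketch of the distinction: each iteration introduces a fresh (here `immutable`) `x`, so the three closures below ought to observe three different variables; the long-standing bug is that they end up sharing one memory location.

```d
import std.stdio;

void main()
{
    int delegate()[] dgs;
    foreach (immutable x; 0 .. 3)
    {
        // x is scoped to this iteration; rebinding one shared
        // immutable variable on every pass would be a contradiction,
        // so each closure should capture its own instance...
        dgs ~= () => x;
    }
    foreach (dg; dgs)
        writeln(dg()); // ...yet the buggy capture makes all three
                       // closures report the final value
}
```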


Adam's workaround ensures that the closure captures a temporary `x` 
variable on the stack: a copy will be made instead of taking a 
reference, since a pointer to `x` would be dangling once the 
`delegate(x){...}` returns.


Most of the time, we want a pointer/reference to the enclosed variables 
in our closures. Note that C++ 17 allows one to select the capture mode: 
the following link lists 8 of them: 
https://en.cppreference.com/w/cpp/language/lambda#Lambda_capture.

...


No, this is not an issue of by value vs by reference. All captures in D 
are by reference, yet the behavior is wrong.


D offers a convenient default that works most of the time. The trade-off 
is having to deal with the creation of several closures referencing a 
variable being modified in a single scope, like the incremented `x` of 
the for loop.

...


By reference capturing may be a convenient default, but even capturing 
by reference the behavior is wrong.






Re: Performance of tables slower than built in?

2019-05-23 Thread Timon Gehr via Digitalmars-d-learn

On 23.05.19 12:21, Alex wrote:

On Wednesday, 22 May 2019 at 00:55:37 UTC, Adam D. Ruppe wrote:

On Wednesday, 22 May 2019 at 00:22:09 UTC, JS wrote:
I am trying to create some fast sin, sinc, and exponential routines 
to speed up some code by using tables... but it seems it's slower 
than the function itself?!?


There's intrinsic cpu instructions for some of those that can do the 
math faster than waiting on memory access.


It is quite likely calculating it is actually faster. Even carefully 
written and optimized tables tend to just have a very small win 
relative to the cpu nowadays.


Surely not? I'm not sure what method is used to calculate them and maybe 
a table method is used internally for the common functions(maybe the 
periodic ones) but memory access surely is faster than multiplying doubles?

...


Depends on what kind of memory access, and what kind of faster. If you 
hit L1 cache then a memory access might be (barely) faster than a single 
double multiplication. (But modern hardware usually can do multiple 
double multiplies in parallel, and presumably also multiple memory 
reads, using SIMD and/or instruction-level parallelism.)


I think a single in-register double multiplication will be roughly 25 
times faster than an access to main memory. Each access to main memory 
will pull an entire cache line from main memory to the cache, so if you 
have good locality (you usually won't with a LUT), your memory accesses 
will be faster on average. There are a lot of other microarchitectural 
details that can matter quite a lot for performance.
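A rough microbenchmark sketch (not from the original post; results vary wildly with cache behavior, table size, and compiler flags, so treat it as a way to measure rather than a prediction):

```d
import std.datetime.stopwatch : AutoStart, StopWatch;
import std.math : PI, sin;
import std.stdio : writeln;

void main()
{
    enum N = 1 << 22;
    enum TABLE = 4096;

    // crude sin LUT over one period; real code would interpolate
    static double[TABLE] lut;
    foreach (i; 0 .. TABLE)
        lut[i] = sin(i * (2.0 * PI / TABLE));

    double s1 = 0, s2 = 0;
    auto sw = StopWatch(AutoStart.yes);
    foreach (i; 0 .. N)
        s1 += sin(i % TABLE * (2.0 * PI / TABLE));
    auto tDirect = sw.peek;

    sw.reset();
    foreach (i; 0 .. N)
        s2 += lut[i % TABLE]; // sequential access: best case for a LUT
    auto tLut = sw.peek;

    writeln("direct: ", tDirect, "  lut: ", tLut, "  diff: ", s1 - s2);
}
```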


And most of the time these functions are computed by some series that 
requires many terms. I'd expect, say, to compute sin one would require 
at least 10 multiplies for any accuracy... and surely that is much 
slower than simply accessing a table(it's true that my code is more 
complex due to the modulos and maybe that is eating up the diff).


Do you have any proof of your claims? Like a paper that discusses such 
things so I can see what's really going on and how they achieve such 
performance(and how accurate)?


Not exactly what you asked, but this might help:
https://www.agner.org/optimize

Also, look up the CORDIC algorithm.
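For reference, a minimal rotation-mode CORDIC sketch (an illustration, not from the original post): it converges to sin/cos of an angle near [-pi/2, pi/2] using one arctangent table entry, additions, and a halving per iteration. Hardware versions use integers and shifts; this sketch uses floating point for clarity.

```d
import std.math : atan, cos, isClose, sin, sqrt;

void cordic(double theta, out double s, out double c, int iters = 40)
{
    double x = 1, y = 0, z = theta;
    double t = 1;    // 2^-i
    double gain = 1; // accumulated magnitude gain of the rotations
    foreach (i; 0 .. iters)
    {
        double d = z >= 0 ? 1 : -1; // rotate toward z == 0
        auto nx = x - d * y * t;
        auto ny = y + d * x * t;
        z -= d * atan(t);           // precomputed table in hardware
        x = nx;
        y = ny;
        gain *= sqrt(1 + t * t);
        t *= 0.5;
    }
    c = x / gain;
    s = y / gain;
}

void main()
{
    double s, c;
    cordic(0.5, s, c);
    assert(isClose(s, sin(0.5), 1e-9));
    assert(isClose(c, cos(0.5), 1e-9));
}
```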


Re: CTFE sort of tuples

2019-05-03 Thread Timon Gehr via Digitalmars-d-learn

On 02.05.19 09:28, Stefan Koch wrote:

On Thursday, 2 May 2019 at 02:54:03 UTC, Andrey wrote:

Hello, I have got this code:


    [...]


I want to sort array of tuples using "data" element in CTFE. But this 
code give me errors:

[...]


As I understand the function "sort" sometimes can't be run at CT 
because of reinterpreting cast.

In this case how to sort?


write a sort?

a bubble-sort should be sufficient if the arrays are as short as in 
the example.


Well, clearly, we should be able to /swap values/ at compile time.
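Until then, a hand-written sort works: a short insertion sort contains only assignments and comparisons, none of the reinterpret casts that block `std.algorithm.sort` in CTFE. A sketch (the `Item` alias and helper name are invented; the `data` field mirrors the question):

```d
import std.typecons : Tuple;

alias Item = Tuple!(string, "name", int, "data");

// Plain insertion sort: CTFE-friendly, unlike std.algorithm.sort's
// casting internals.
Item[] ctfeSort(Item[] a)
{
    foreach (i; 1 .. a.length)
    {
        auto key = a[i];
        size_t j = i;
        while (j > 0 && key.data < a[j - 1].data)
        {
            a[j] = a[j - 1];
            --j;
        }
        a[j] = key;
    }
    return a;
}

enum sorted = ctfeSort([Item("b", 3), Item("c", 1), Item("a", 2)]);
static assert(sorted[0].data == 1 && sorted[2].data == 3);
```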


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-02-01 Thread Timon Gehr via Digitalmars-d-announce

On 01.02.19 10:10, aliak wrote:




Shouldn't doubleMyValue(pt.x) be a compiler error if pt.x is a getter? 
For it not to be a compile error pt.x should also have a setter, in 
which case the code needs to be lowered to something else:


{
   auto __temp = pt.x;
   doubleMyValue(__temp);
   pt.x = __temp;
}

I believe this is something along the lines of what Swift and C# do as 
well.


Or something... a DIP to fix properties anyone? :)


http://wilzbach.github.io/d-dip/DIP24

I'm not sure your rewrite is good though, because it does not preserve 
aliasing during the function call.
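A sketch of what can go wrong (all names invented): if the callee can reach `pt` through another alias, the temp-based lowering hands it a stale copy, and the write-back clobbers the callee's own update.

```d
struct Point
{
    private int _x;
    int x() const { return _x; }  // getter
    void x(int v) { _x = v; }     // setter
}

Point pt; // globally reachable alias

void doubleMyValue(ref int v)
{
    pt.x = 100; // writes through the global alias...
    v *= 2;     // ...but under the temp lowering, v is a copy of the
                // old pt.x, and the write-back after the call
                // silently overwrites the 100
}

void main()
{
    pt.x = 3;
    // the proposed lowering of doubleMyValue(pt.x):
    {
        auto tmp = pt.x;  // the DIP's __temp
        doubleMyValue(tmp);
        pt.x = tmp;
    }
    assert(pt.x == 6); // the callee's pt.x = 100 was lost; with true
                       // ref aliasing the result would be 200
}
```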


Re: Why does nobody seem to think that `null` is a serious problem in D?

2018-12-03 Thread Timon Gehr via Digitalmars-d-learn

On 22.11.18 16:19, Steven Schveighoffer wrote:


In terms of language semantics, I don't know what the right answer is. 
If we want to say that if an optimizer changes program behavior, the 
code must be UB, then this would have to be UB.


But I would prefer saying something like -- if a segfault occurs and the 
program continues, the system is in UB-land, but otherwise, it's fine. 
If this means an optimized program runs and a non-optimized one crashes, 
then that's what it means. I'd be OK with that result. It's like 
Schrodinger's segfault!


I don't know what it means in terms of compiler assumptions, so that's 
where my ignorance will likely get me in trouble :)


This is called nondeterministic semantics, and it is a good idea if you 
want both efficiency and memory safety guarantees, but I don't know how 
well our backends would support it.


(However, I think it is necessary anyway, e.g. to give semantics to pure 
functions.)


Re: shared - i need it to be useful

2018-10-22 Thread Timon Gehr via Digitalmars-d

On 22.10.18 16:09, Simen Kjærås wrote:

On Monday, 22 October 2018 at 13:40:39 UTC, Timon Gehr wrote:

module reborked;
import atomic;

void main()@safe{
    auto a=new Atomic!int;
    import std.concurrency;
    spawn((shared(Atomic!int)* a){ ++*a; }, a);
    ++a.tupleof[0];
}


Finally! Proof that MP is impossible. On the other hand, why the hell is 
that @safe? It breaks all sorts of guarantees about @safety. At a 
minimum, that should be un-@safe.


Filed in bugzilla: https://issues.dlang.org/show_bug.cgi?id=19326

--
   Simen


Even if this is changed (and it probably should be), it does not fix the 
case where the @safe function is in the same module. I don't think it is 
desirable to change the definition of @trusted such that you need to 
check the entire module if it contains a single @trusted function.


If I can break safety of some (previously correct) code by editing only 
@safe code, then that's a significant blow to @safe. I think we need a 
general way to protect data from being manipulated in @safe code in any 
way, same module or not.


Re: shared - i need it to be useful

2018-10-22 Thread Timon Gehr via Digitalmars-d

On 22.10.18 15:26, Simen Kjærås wrote:

Here's the correct version:

module atomic;

void atomicIncrement(int* p) @system {
     import core.atomic;
     atomicOp!("+=",int,int)(*cast(shared(int)*)p,1);
}

struct Atomic(T) {
     // Should probably mark this shared for extra safety,
     // but it's not strictly necessary
     private T val;
     void opUnary(string op : "++")() shared @trusted {
     atomicIncrement(cast(T*)&val);
     }
}
-
module unborked;
import atomic;

void main() @safe {
     auto a = new Atomic!int;
     import std.concurrency;
     spawn((shared(Atomic!int)* a){ ++*a; }, a);
     //++i.val; // Cannot access private member
}

Once more, Joe Average Programmer should not be writing the @trusted 
code in Atomic!T.opUnary - he should be using libraries written by 
people who have studied the exact issues that make multithreading hard.


--
   Simen


module reborked;
import atomic;

void main()@safe{
auto a=new Atomic!int;
import std.concurrency;
spawn((shared(Atomic!int)* a){ ++*a; }, a);
++a.tupleof[0];
}


Re: shared - i need it to be useful

2018-10-22 Thread Timon Gehr via Digitalmars-d

On 22.10.18 03:01, Manu wrote:

On Sun, Oct 21, 2018 at 5:55 PM Timon Gehr via Digitalmars-d wrote:


On 22.10.18 02:45, Manu wrote:

On Sun, Oct 21, 2018 at 5:35 PM Timon Gehr via Digitalmars-d wrote:


On 21.10.18 20:46, Manu wrote:

Shared data is only useful if, at some point, it is read/written, presumably by
casting it to unshared in @trusted code. As soon as that is done, you've got a
data race with the other existing unshared aliases.

If such a race is possible, then the @trusted function is not
threadsafe, so it is not @trusted by definition.
You wrote a bad @trusted function, and you should feel bad.
...


I wonder where this "each piece of code is maintained by only one person
and furthermore this is the only person that will suffer if the code has
bugs" mentality comes from. It is very popular as well as obviously
nonsense.


The simplest way to guarantee that no unsafe access is possible is to
use encapsulation to assure no unregulated access exists.


This only works if untrusted programmers (i.e. programmers who are only
allowed to write/modify @safe code) are not allowed to change your
class. I.e. it does not work.


Have you ever cracked open std::map and 'fixed' it because you thought
it was bad?


(Also, yes, some people do that because std::map does not provide an 
interface to augment the binary search tree.)



Of course not. Same applies here. Nobody 'fixes' core.atomic.Atomic
without understanding what they're doing.



You are not proposing to let core.atomic.Atomic convert to shared
implicitly, you are proposing to do that for all classes.


You can always implicitly convert to shared.


Yes, exactly what I said.


Where did I ever say anything like that? I'm sure I've never said
this.


???

I said that you are proposing to allow implicit conversions to shared 
for all classes, not only core.atomic.Atomic, and the last time you said 
it was the previous sentence of the same post.



How do these transformations of what I've said keep happening?
...


You literally said that nobody changes core.atomic.Atomic. Anyway, even 
if I bought that @safe somehow should not be checked within druntime (I 
don't), bringing up this example does not make for a coherent argument 
why implicit conversion to shared should be allowed for all classes.



You seem to be stuck on the detail whether you can trust the @trusted
author though...


Again: the @safe author is the problem.


I don't follow. The @safe author is incapable of doing threadsafety
violation.


They are capable of doing so as soon as you provide them a @trusted 
function that treats data as shared that they can access as unshared.



They can only combine threadsafe functions.
They can certainly produce a program that doesn't work, and they are
capable of ordering issues, but that's not the same as data-race
related crash bugs.



Accessing private members of aggregates in the same module is @safe. 
tupleof is @safe too.


Re: shared - i need it to be useful

2018-10-22 Thread Timon Gehr via Digitalmars-d

On 22.10.18 14:39, Aliak wrote:

On Monday, 22 October 2018 at 10:26:14 UTC, Timon Gehr wrote:

---
module borked;

void atomicIncrement(int* p)@system{
    import core.atomic;
    atomicOp!("+=",int,int)(*cast(shared(int)*)p,1);
}

struct Atomic(T){
    private T val;
    void opUnary(string op : "++")() shared @trusted {
    atomicIncrement(cast(T*)&val);
    }
}
void main()@safe{
    auto a=new Atomic!int;
    import std.concurrency;
    spawn((shared(Atomic!int)* a){ ++*a; }, a);
    ++a.val; // race
}
---


Oh no! The author of the @trusted function (i.e. you) did not deliver 
on the promise they made!


hi, if you change the private val in Atomic to be “private shared T 
val”, is the situation the same?


It's a bit different, because then there is no implicit unshared->shared 
conversion happening, and this discussion is only about that. However, 
without further restrictions, you can probably construct cases where a 
@safe function in one module escapes a private shared(T)* member to 
somewhere else that expects a different synchronization strategy.


Therefore, even if we agree that unshared->shared conversion cannot be 
implicit in @safe code, the 'shared' design is not complete, but it 
would be a good first step to agree that this cannot happen, such that 
we can then move on to harder issues.


E.g. probably it would be good to have something like @trusted data that 
cannot be manipulated from @safe code, such that @trusted functions can 
rely on some invariants.


Re: shared - i need it to be useful

2018-10-22 Thread Timon Gehr via Digitalmars-d

On 22.10.18 12:26, Timon Gehr wrote:

---
module borked;

void atomicIncrement(int* p)@system{
     import core.atomic;
     atomicOp!("+=",int,int)(*cast(shared(int)*)p,1);
}

struct Atomic(T){
     private T val;
     void opUnary(string op : "++")() shared @trusted {
     atomicIncrement(cast(T*)&val);
     }
}
void main()@safe{
     Atomic!int i;
     auto a=&[i][0];// was: Atomic!int* a = &i;
     import std.concurrency;
     spawn((shared(Atomic!int)* a){ ++*a; }, a);
     ++i.val; // race
}
---


Obviously, this should have been:

---
module borked;

void atomicIncrement(int*p)@system{
import core.atomic;
atomicOp!"+="(*cast(shared(int)*)p,1);
}
struct Atomic(T){
private T val;
void opUnary(string op:"++")()shared @trusted{
atomicIncrement(cast(T*)&val);
}
}
void main()@safe{
auto a=new Atomic!int;
import std.concurrency;
spawn((shared(Atomic!int)* a){ ++*a; }, a);
++a.val; // race
}
---

(I was short on time and had to fix Manu's code because it was not 
actually compilable.)


Re: shared - i need it to be useful

2018-10-22 Thread Timon Gehr via Digitalmars-d

On 22.10.18 02:54, Manu wrote:

On Sun, Oct 21, 2018 at 5:40 PM Timon Gehr via Digitalmars-d wrote:


On 21.10.18 21:04, Manu wrote:

On Sun, Oct 21, 2018 at 12:00 PM Timon Gehr via Digitalmars-d wrote:


On 21.10.18 17:54, Nicholas Wilson wrote:



As soon as that is done, you've got a data race with the other
existing unshared aliases.


You're in @trusted code, that is the whole point. The onus is on the
programmer to make that correct, same with regular @safe/@trusted/@system
code.


Not all of the parties that participate in the data race are in @trusted
code. The point of @trusted is modularity: you manually check @trusted
code according to some set of restrictions and then you are sure that
there is no memory corruption.

Note that you are not allowed to look at any of the @safe code while
checking your @trusted code. You will only see an opaque interface to
the @safe code that you call and all you know is that all the @safe code
type checks according to @safe rules. Note that there might be an
arbitrary number of @safe functions and methods that you do not see.

Think about it this way: you first write all the @trusted and @system
code, and some evil guy who does not like you comes in after you, looks
at your code and writes all the @safe code. If there is any memory
corruption, it will be your fault and you will face harsh consequences.
Now, design the @safe type checking rules. It won't be MP!

Note that there may well be a good way to get the good properties of MP
without breaking the type system, but MP itself is not good because it
breaks @safe.


Show me. Nobody has been able to show that yet. I'd really like to know this.



I just did,


There's no code there... just a presumption that the person who wrote
the @trusted code did not deliver the promise they made.
...


Yes, because there is no way to write @trusted code that holds its 
promise while actually doing something interesting in multiple threads 
if @safe code can implicitly convert from unshared to shared.



but if you really need to, give me a non-trivial piece of 
multithreaded code that accesses some declared-unshared field
from a shared method and I will show you how the evil guy would modify
some @safe code in it and introduce race conditions. It needs to be your
code, as otherwise you will just claim again that it is me who wrote bad
@trusted code.


You can pick on any of my prior code fragments. They've all been ignored so far.



I don't want "code fragments". Show me the real code.

I manually browsed through posts now (thanks a lot) and found this 
implementation:


struct Atomic(T){
  void opUnary(string op : "++")() shared { atomicIncrement(&val); }
  private T val;
}

This is @system code. There is no @safe or @trusted here, so I am 
ignoring it.



Then I browsed some more, because I had nothing better to do, and I 
found this. I completed it so that it is actually compilable, except for 
the unsafe implicit conversion.


Please read this code, and then carefully read the comments below it 
before you respond. I will totally ignore any of your answers that 
arrive in the next two hours.


---
module borked;

void atomicIncrement(int* p)@system{
import core.atomic;
atomicOp!("+=",int,int)(*cast(shared(int)*)p,1);
}

struct Atomic(T){
private T val;
void opUnary(string op : "++")() shared @trusted {
atomicIncrement(cast(T*)&val);
}
}
void main()@safe{
Atomic!int i;
auto a=&[i][0];// was: Atomic!int* a = &i;
import std.concurrency;
spawn((shared(Atomic!int)* a){ ++*a; }, a);
++i.val; // race
}
---


Oh no! The author of the @trusted function (i.e. you) did not deliver on 
the promise they made!


Now, before you go and tell me that I am stupid because I wrote bad 
code, consider the following:


- It is perfectly @safe to access private members from the same module.

- You may not blame the my @safe main function for the problem. It is 
@safe, so it cannot be blamed for UB. Any UB is the result of a bad 
@trusted function, a compiler bug, or hardware failure.


- The only @trusted function in this module was written by you.

You said that there is a third implementation somewhere. If that one 
actually works, I apologize and ask you to please paste it again in this 
subthread.




Re: shared - i need it to be useful

2018-10-21 Thread Timon Gehr via Digitalmars-d

On 22.10.18 02:46, Nicholas Wilson wrote:

On Monday, 22 October 2018 at 00:38:33 UTC, Timon Gehr wrote:

I just did,


Link please?



https://forum.dlang.org/post/pqii8k$11u3$1...@digitalmars.com


Re: shared - i need it to be useful

2018-10-21 Thread Timon Gehr via Digitalmars-d

On 22.10.18 02:45, Manu wrote:

On Sun, Oct 21, 2018 at 5:35 PM Timon Gehr via Digitalmars-d wrote:


On 21.10.18 20:46, Manu wrote:

Shared data is only useful if, at some point, it is read/written, presumably by
casting it to unshared in @trusted code. As soon as that is done, you've got a
data race with the other existing unshared aliases.

If such a race is possible, then the @trusted function is not
threadsafe, so it is not @trusted by definition.
You wrote a bad @trusted function, and you should feel bad.
...


I wonder where this "each piece of code is maintained by only one person
and furthermore this is the only person that will suffer if the code has
bugs" mentality comes from. It is very popular as well as obviously
nonsense.


The simplest way to guarantee that no unsafe access is possible is to
use encapsulation to assure no unregulated access exists.


This only works if untrusted programmers (i.e. programmers who are only
allowed to write/modify @safe code) are not allowed to change your
class. I.e. it does not work.


Have you ever cracked open std::map and 'fixed' it because you thought
it was bad?
Of course not. Same applies here. Nobody 'fixes' core.atomic.Atomic
without understanding what they're doing.



You are not proposing to let core.atomic.Atomic convert to shared 
implicitly, you are proposing to do that for all classes.



You seem to be stuck on the detail whether you can trust the @trusted
author though...


Again: the @safe author is the problem.



Re: shared - i need it to be useful

2018-10-21 Thread Timon Gehr via Digitalmars-d

On 21.10.18 21:04, Manu wrote:

On Sun, Oct 21, 2018 at 12:00 PM Timon Gehr via Digitalmars-d wrote:


On 21.10.18 17:54, Nicholas Wilson wrote:



As soon as that is done, you've got a data race with the other
existing unshared aliases.


You're in @trusted code, that is the whole point. The onus is on the
programmer to make that correct, same with regular @safe/@trusted/@system
code.


Not all of the parties that participate in the data race are in @trusted
code. The point of @trusted is modularity: you manually check @trusted
code according to some set of restrictions and then you are sure that
there is no memory corruption.

Note that you are not allowed to look at any of the @safe code while
checking your @trusted code. You will only see an opaque interface to
the @safe code that you call and all you know is that all the @safe code
type checks according to @safe rules. Note that there might be an
arbitrary number of @safe functions and methods that you do not see.

Think about it this way: you first write all the @trusted and @system
code, and some evil guy who does not like you comes in after you, looks
at your code and writes all the @safe code. If there is any memory
corruption, it will be your fault and you will face harsh consequences.
Now, design the @safe type checking rules. It won't be MP!

Note that there may well be a good way to get the good properties of MP
without breaking the type system, but MP itself is not good because it
breaks @safe.


Show me. Nobody has been able to show that yet. I'd really like to know this.



I just did, but if you really need to, give me a non-trivial piece of 
correct multithreaded code that accesses some declared-unshared field 
from a shared method and I will show you how the evil guy would modify 
some @safe code in it and introduce race conditions. It needs to be your 
code, as otherwise you will just claim again that it is me who wrote bad 
@trusted code.


Re: shared - i need it to be useful

2018-10-21 Thread Timon Gehr via Digitalmars-d

On 21.10.18 20:46, Manu wrote:

Shared data is only useful if, at some point, it is read/written, presumably by
casting it to unshared in @trusted code. As soon as that is done, you've got a
data race with the other existing unshared aliases.

If such a race is possible, then the @trusted function is not
threadsafe, so it is not @trusted by definition.
You wrote a bad @trusted function, and you should feel bad.
...


I wonder where this "each piece of code is maintained by only one person 
and furthermore this is the only person that will suffer if the code has 
bugs" mentality comes from. It is very popular as well as obviously 
nonsense.



The simplest way to guarantee that no unsafe access is possible is to
use encapsulation to assure no unregulated access exists.


This only works if untrusted programmers (i.e. programmers who are only 
allowed to write/modify @safe code) are not allowed to change your 
class. I.e. it does not work.


Re: shared - i need it to be useful

2018-10-21 Thread Timon Gehr via Digitalmars-d

On 21.10.18 17:54, Nicholas Wilson wrote:


As soon as that is done, you've got a data race with the other 
existing unshared aliases.


You're in @trusted code, that is the whole point. The onus is on the 
programmer to make that correct, same with regular @safe/@trusted/@system 
code.


Not all of the parties that participate in the data race are in @trusted 
code. The point of @trusted is modularity: you manually check @trusted 
code according to some set of restrictions and then you are sure that 
there is no memory corruption.


Note that you are not allowed to look at any of the @safe code while 
checking your @trusted code. You will only see an opaque interface to 
the @safe code that you call and all you know is that all the @safe code 
type checks according to @safe rules. Note that there might be an 
arbitrary number of @safe functions and methods that you do not see.


Think about it this way: you first write all the @trusted and @system 
code, and some evil guy who does not like you comes in after you, looks 
at your code and writes all the @safe code. If there is any memory 
corruption, it will be your fault and you will face harsh consequences. 
Now, design the @safe type checking rules. It won't be MP!


Note that there may well be a good way to get the good properties of MP 
without breaking the type system, but MP itself is not good because it 
breaks @safe.


Re: shared - i need it to be useful

2018-10-18 Thread Timon Gehr via Digitalmars-d

On 19.10.18 02:29, Stanislav Blinov wrote:

On Thursday, 18 October 2018 at 23:47:56 UTC, Timon Gehr wrote:

I'm pretty sure you will have to allow operations on shared local 
variables. Otherwise, how are you ever going to use a shared(C)? You 
can't even call a shared method on it because it involves reading the 
reference.


Because you can't really "share" C (e.g. by value). You share a C*, or, 
rather a shared(C)*.


(Here, I intended C to be a class, if that was unclear.)

The pointer itself, which you own, isn't shared at 
all, and shouldn't be: it's your own reference to shared data. You can 
read and write that pointer all you want. What you must not be able to 
do is read and write *c.

...


Presumably you could have a local variable shared(C) c, then take its 
address  and send it to a thread which will be terminated before the 
scope of the local variable ends.


So, basically, the lack of tail-shared is an issue.


Re: shared - i need it to be useful

2018-10-18 Thread Timon Gehr via Digitalmars-d

On 18.10.18 23:34, Erik van Velzen wrote:
If you have an object which can be used in both a thread-safe and a 
thread-unsafe way that's a bug or code smell.


Then why do you not just make all members shared? Because with Manu's 
proposal, as soon as you have a shared method, all members effectively 
become shared. It just seems pointless to type them as unshared anyway 
and then rely on convention within @safe code to prevent unsafe 
accesses. Because, why? It just makes no sense.


With the proposal I posted in the beginning, you would then not only get 
implicit conversion of class references to shared, but also back to 
unshared.


I think the conflation of shared member functions and thread safe member 
functions is confusing. shared on a member function just means that the 
`this` reference is shared. The only use case for this is overloading on 
shared. The D approach to multithreading is that /all/ functions should 
be thread safe, but it is easier for some of them because they don't 
even need to access any shared state. It is therefore helpful if the 
type system cleanly separates shared from unshared state.
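In this reading, `shared` on a method is just an overload axis. A sketch (the class and members are invented for illustration):

```d
import core.atomic : atomicLoad, atomicOp;

class Counter
{
    private int n;

    // unshared overloads: this thread owns the object, no sync needed
    void inc() { ++n; }
    int get() const { return n; }

    // shared overloads: `this` (and thus n) is typed shared, so the
    // implementation must synchronize, here via atomics
    void inc() shared { atomicOp!"+="(n, 1); }
    int get() shared const { return atomicLoad(n); }
}
```

The compiler picks the overload from the type of the reference, so code holding a `shared(Counter)` cannot accidentally call the unsynchronized version.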


Re: shared - i need it to be useful

2018-10-18 Thread Timon Gehr via Digitalmars-d

On 18.10.18 20:26, Steven Schveighoffer wrote:


i = 1;
int x = i;
shared int y = i;


This should be fine, y is not shared when being created.

However, this still is allowed, and shouldn't be:

y = 5;

-Steve


I'm pretty sure you will have to allow operations on shared local 
variables. Otherwise, how are you ever going to use a shared(C)? You 
can't even call a shared method on it because it involves reading the 
reference.


Re: shared - i need it to be useful

2018-10-17 Thread Timon Gehr via Digitalmars-d

On 17.10.2018 16:14, Nicholas Wilson wrote:


I was thinking that mutable -> shared const as apposed to mutable -> 
shared would get around the issues that Timon posted.


Unfortunately not. For example, the thread with the mutable reference is 
not obliged to actually make the changes that are performed on that 
reference visible to other threads.


Re: shared - i need it to be useful

2018-10-17 Thread Timon Gehr via Digitalmars-d

On 17.10.2018 15:40, Steven Schveighoffer wrote:

On 10/17/18 8:02 AM, Timon Gehr wrote:
Now, if a class has only shared members, that is another story. In 
this case, all references should implicitly convert to shared. There's 
a DIP I meant to write about this. (For all qualifiers, not just shared).


When you say "shared members", you mean all the data is shared too or 
just the methods are shared?


If not the data, D has a problem with encapsulation. Not only all the 
methods on the class must be shared, but ALL code in the entire module 
must be marked as using a shared class instance. Otherwise, other 
functions could modify the private data without using the proper synch 
mechanisms.


We are better off requiring the cast, or enforcing that one must use a 
shared object to begin with.


I think any sometimes-shared object is in any case going to benefit from 
parallel implementations for when the thing is unshared.


-Steve


The specific proposal was that, for example, if a class is defined like 
this:


shared class C{
// ...
}

then shared(C) and C are implicitly convertible to each other. The 
change is not fully backwards-compatible, because right now, this 
annotation just makes all members (data and methods) shared, but child 
classes may introduce unshared members.


Re: shared - i need it to be useful

2018-10-17 Thread Timon Gehr via Digitalmars-d

On 17.10.2018 14:24, Timon Gehr wrote:

and unshared methods are only allowed to access unshared members.


This is actually not necessary, let me reformulate:

You want:

- if you have a C c and a shared(C) s, typeof(s.x) == typeof(c.x).
- shared methods are not allowed to access unshared members.
- shared is not transitive, and therefore unshared class references 
implicitly convert to shared class references


Applied to pointers, this would mean that you can implicitly convert 
int* -> shared(int*), but not shared(int*)->int*, int* -> shared(int)* 
or shared(int)* -> int*. shared(int*) and shared(shared(int)*) would be 
different types, such that shared(int*) cannot be dereferenced but 
shared(shared(int)*) can.
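Spelled out as code, the rule set above would read as follows. This is a sketch of the *hypothetical* semantics under the non-transitive proposal, not current D; the commented-out lines mark the conversions that would be rejected.

```d
void sketch() {
    int* p;
    shared(int*) a = p;        // ok: the reference becomes shared, the payload does not
    // int* q = a;             // error: shared(int*) -> int*
    // shared(int)* b = p;     // error: int* -> shared(int)*

    shared(shared(int)*) c;    // a distinct type from shared(int*):
    // int v = *a;             // error: shared(int*) cannot be dereferenced
    // shared(int) w = *c;     // ok under the proposal: yields shared(int)
}
```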


Re: shared - i need it to be useful

2018-10-17 Thread Timon Gehr via Digitalmars-d

On 17.10.2018 14:29, Timon Gehr wrote:

to access c.m iff m is not shared


Unfortunate typo. This should be if, not iff (if and only if).


Re: shared - i need it to be useful

2018-10-17 Thread Timon Gehr via Digitalmars-d

On 17.10.2018 09:20, Manu wrote:

Timon Gehr has done a good job showing that they still stand
unbreached.

His last comment was applied to a different proposal.
His only comment on this thread wasn't in response to the proposal in
this thread.
If you nominate Timon as your proxy, then he needs to destroy my
proposal, or at least comment on it, rather than make some prejudiced
comment generally.



There is no "prejudice", just reasoning. Your proposal was "disallow 
member access on shared aggregates, allow implicit conversion from 
unshared to shared and keep everything else the same". This is a bad 
proposal. There may be a good proposal that allows the things you want, 
but you have not stated what they are, your OP was just: "look at this 
bad proposal, it might work, no?" I said no, then was met with some 
hostility.


You should focus on finding a good proposal that achieves what you want 
without breaking the type system instead of attacking me.


Re: shared - i need it to be useful

2018-10-17 Thread Timon Gehr via Digitalmars-d

On 17.10.2018 14:24, Timon Gehr wrote:

On 15.10.2018 23:51, Manu wrote:

If a shared method is incompatible with an unshared method, your class
is broken.


Then what you want is not implicit unshared->shared conversion. What you 
want is a different way to type shared member access. You want a setup 
where shared methods are only allowed to access shared members and 
unshared methods are only allowed to access unshared members.


I.e., what you want is that shared is not transitive. You want that if 
you have a shared(C) c, then it is an error to access c.m iff m is not 
shared. This way you can have partially shared classes, where part of 
the class is thread-local, and other parts are shared with other threads.


Is this it?


(Also, with this new definition of 'shared', unshared -> shared 
conversion would of course become sound.)


Re: shared - i need it to be useful

2018-10-17 Thread Timon Gehr via Digitalmars-d

On 15.10.2018 23:51, Manu wrote:

If a shared method is incompatible with an unshared method, your class
is broken.


Then what you want is not implicit unshared->shared conversion. What you 
want is a different way to type shared member access. You want a setup 
where shared methods are only allowed to access shared members and 
unshared methods are only allowed to access unshared members.


I.e., what you want is that shared is not transitive. You want that if 
you have a shared(C) c, then it is an error to access c.m iff m is not 
shared. This way you can have partially shared classes, where part of 
the class is thread-local, and other parts are shared with other threads.


Is this it?


Re: shared - i need it to be useful

2018-10-17 Thread Timon Gehr via Digitalmars-d

On 16.10.2018 19:25, Manu wrote:

On Tue, Oct 16, 2018 at 3:20 AM Timon Gehr via Digitalmars-d
 wrote:


On 15.10.2018 20:46, Manu wrote:


Assuming the rules above: "can't read or write to members", and the
understanding that `shared` methods are expected to have threadsafe
implementations (because that's the whole point), what are the risks
from allowing T* -> shared(T)* conversion?



Unshared becomes useless, and in turn, shared becomes useless. You can't
have unshared/shared aliasing.


What aliasing?


Aliasing means you have two references to the same data. The two 
references are then said to alias. An implicit conversion from unshared 
to shared by definition introduces aliasing, where one of the two 
references is unshared and the other is shared.



Please show a reasonable and likely construction of the
problem. I've been trying to think of it.
...


I have given you an example. I don't care whether it is "reasonable" or 
"likely". The point of @safe is to have a subset of the language with a 
sound type system.



All the risks that I think have been identified previously assume that
you can arbitrarily modify the data. That's insanity... assume we fix
that... I think the promotion actually becomes safe now...?


But useless, because there is no way to ensure thread safety of reads
and writes if only one party to the shared state knows about the sharing.


What? I don't understand this sentence.

If a shared method is not threadsafe, then it's an implementation
error. A user should expect that a shared method is threadsafe,
otherwise it shouldn't be a shared method! Thread-local (ie, normal)
methods are for not-threadsafe functionality.



Your function can be thread safe all you want. If you have a thread 
unsafe function also operating on the same state, you will still get 
race conditions. E.g. mutual exclusion is based on cooperation from all 
threads. If one of them forgets to lock, it does not matter how 
threadsafe all the others are.
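The lock-forgetting scenario can be sketched concretely (function names are hypothetical; `Mutex` is from `core.sync.mutex`):

```d
import core.sync.mutex : Mutex;

__gshared int balance;
__gshared Mutex guard;

// Cooperates with the locking protocol: thread-safe on its own terms.
void safeDeposit(int amount) {
    guard.lock();
    scope(exit) guard.unlock();
    balance += amount;
}

// Forgets to lock: races with safeDeposit no matter how
// carefully safeDeposit itself is written.
void sloppyDeposit(int amount) {
    balance += amount;
}
```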


Re: shared - i need it to be useful

2018-10-17 Thread Timon Gehr via Digitalmars-d

On 16.10.2018 20:07, Manu wrote:

On Tue, Oct 16, 2018 at 6:25 AM Timon Gehr via Digitalmars-d
 wrote:


On 16.10.2018 13:04, Dominikus Dittes Scherkl wrote:

On Tuesday, 16 October 2018 at 10:15:51 UTC, Timon Gehr wrote:

On 15.10.2018 20:46, Manu wrote:


Assuming the rules above: "can't read or write to members", and the
understanding that `shared` methods are expected to have threadsafe
implementations (because that's the whole point), what are the risks
from allowing T* -> shared(T)* conversion?



Unshared becomes useless, and in turn, shared becomes useless.

why is unshared useless?
Unshared means you can read and write to it.
If you give it to a function that expects something shared,

the function you had given it to can't read or write it, so it
can't do any harm.


It can do harm to others who hold an unshared alias to the same data and
are operating on it concurrently.


Nobody else holds an unshared alias.


How so? If you allow implicit conversions from unshared to shared, then 
you immediately get this situation.



If you pass a value as const, you don't fear that it will become mutable.
...


No, but as I already explained last time, mutable -> const is not at all 
like unshared -> shared.


const only takes away capabilities, shared adds new capabilities, such 
as sending a reference to another thread. If you have two threads that 
share data, you need cooperation from both to properly synchronize accesses.



Of course it can handle it threadsafe, but as it is local,
that is only overhead - reading or changing the value can't do
any harm either. I like the idea.


But useless, because there is no way to ensure thread safety of reads
and writes if only one party to the shared state knows about the sharing.

Of course there is.


Please do enlighten me. You have two processors operating
(reading/writing) on the same address space on a modern computer
architecture with a weak memory model, and you are using an optimizing
compiler. How do you ensure sensible results without cooperation from
both of them? (Hint: you don't.)


What? This is a weird statement.
So, you're saying that nobody has successfully written any threadsafe
code, ever... we should stop trying, and we should admit that
threadsafe queues and atomics, and mutexes and stuff all don't exist?



Obviously I am not saying that.


without cooperation from both of them?


Perhaps this is the key to your statement?


Yes.


Yes. 'cooperation from both of them' in this case means, they are both
interacting with a threadsafe api, and they are blocked from accessing
members, or any non-threadsafe api.
...


Yes. Your proposal only enforces this for the shared alias.


Giving an unshared value to a function that
even can handle shared values may create some overhead, but is
indeed threadsafe.



Yes, if you give it to one function only, that is the case. However, as
you may know, concurrency means that there may be multiple functions
operating on the data _at the same time_. If one of them operates on the
data as if it was not shared, you will run into trouble.


Who's doing this,


Anyone, it really does not matter. One major point of the type system is 
to ensure that _all_ @safe code has defined behavior. You can convert 
between shared and unshared, just not in @safe code.



and how?
...


They create a mutable instance of a class, they create a shared alias 
using one of your proposed holes, then send the shared alias to another 
thread, call some methods on it in both threads and get race conditions.



You are arguing as if there was either no concurrency or no mutable
aliasing.


If a class has no shared methods, there's no possibility for mutable aliasing.
If the class has shared methods, then the class was carefully designed
to be threadsafe.



Not necessarily. Counterexample:

@safe:
class C{
int x;
void foo(){
x+=1; // this can still race with atomicIncrement
}
void bar()shared{
atomicIncrement(x); // presumably you want to allow this
}
}

void main(){
auto c=new C();
shared s=c; // depending on your exact proposed rules, this step 
may be more cumbersome

spawn!(()=>s.bar());
s.foo(); // race
}

Now, if a class has only shared members, that is another story. In this 
case, all references should implicitly convert to shared. There's a DIP 
I meant to write about this. (For all qualifiers, not just shared).


Re: shared - i need it to be useful

2018-10-16 Thread Timon Gehr via Digitalmars-d

On 16.10.2018 13:04, Dominikus Dittes Scherkl wrote:

On Tuesday, 16 October 2018 at 10:15:51 UTC, Timon Gehr wrote:

On 15.10.2018 20:46, Manu wrote:


Assuming the rules above: "can't read or write to members", and the
understanding that `shared` methods are expected to have threadsafe
implementations (because that's the whole point), what are the risks
from allowing T* -> shared(T)* conversion?



Unshared becomes useless, and in turn, shared becomes useless.

why is unshared useless?
Unshared means you can read and write to it.
If you give it to a function that expects something shared,
the function you had given it to can't read or write it, so it
can't do any harm.


It can do harm to others who hold an unshared alias to the same data and 
are operating on it concurrently.



Of course it can handle it threadsafe, but as it is local,
that is only overhead - reading or changing the value can't do
any harm either. I like the idea.

But useless, because there is no way to ensure thread safety of reads 
and writes if only one party to the shared state knows about the sharing.

Of course there is.


Please do enlighten me. You have two processors operating 
(reading/writing) on the same address space on a modern computer 
architecture with a weak memory model, and you are using an optimizing 
compiler. How do you ensure sensible results without cooperation from 
both of them? (Hint: you don't.)



Giving an unshared value to a function that
even can handle shared values may create some overhead, but is
indeed threadsafe.



Yes, if you give it to one function only, that is the case. However, as 
you may know, concurrency means that there may be multiple functions 
operating on the data _at the same time_. If one of them operates on the 
data as if it was not shared, you will run into trouble.


You are arguing as if there was either no concurrency or no mutable 
aliasing.


Re: shared - i need it to be useful

2018-10-16 Thread Timon Gehr via Digitalmars-d

On 15.10.2018 20:46, Manu wrote:


Assuming the rules above: "can't read or write to members", and the
understanding that `shared` methods are expected to have threadsafe
implementations (because that's the whole point), what are the risks
from allowing T* -> shared(T)* conversion?



Unshared becomes useless, and in turn, shared becomes useless. You can't 
have unshared/shared aliasing.



All the risks that I think have been identified previously assume that
you can arbitrarily modify the data. That's insanity... assume we fix
that... I think the promotion actually becomes safe now...?


But useless, because there is no way to ensure thread safety of reads 
and writes if only one party to the shared state knows about the sharing.


Re: `shared`...

2018-10-01 Thread Timon Gehr via Digitalmars-d

On 02.10.2018 01:09, Manu wrote:

Your entire example depends on escaping references. I think you missed
the point?


There was no 'scope' in the OP, and no, that is not sufficient either, 
because scope is not transitive but shared is.


Re: `shared`...

2018-10-01 Thread Timon Gehr via Digitalmars-d

On 01.10.2018 04:29, Manu wrote:

struct Bob
{
   void setThing() shared;
}

As I understand, `shared` attribution intends to guarantee that I do
synchronisation internally.
This method is declared shared, so if I have shared instances, I can
call it... because it must handle thread-safety internally.

void f(ref shared Bob a, ref Bob b)
{
   a.setThing(); // I have a shared object, can call shared method

   b.setThing(); // ERROR
}

This is the bit of the design that doesn't make sense to me...
The method is shared, which suggests that it must handle
thread-safety. My instance `b` is NOT shared, that is, it is
thread-local.
So, I know that there's not a bunch of threads banging on this
object... but the shared method should still work! A method that
handles thread-safety doesn't suddenly not work when it's only
accessed from a single thread.
...


shared on a method does not mean "this function handles thread-safety". 
It means "the `this` pointer of this function is not guaranteed to be 
thread-local". You can't implicitly create an alias of a reference that 
is supposed to be thread-local such that the resulting reference can be 
freely shared among threads.



I feel like I don't understand the design...
mutable -> shared should work the same as mutable -> const... because
surely that's safe?


No. The main point of shared (and the main thing you need to understand) 
is that it guarantees that if something is _not_ `shared`, it is not 
shared among threads. Your analogy is not correct: going from 
thread-local to shared is like going from mutable to immutable.


If the suggested typing rule was implemented, we would have the 
following way to break the type system, allowing arbitrary aliasing 
between mutable and shared references, completely defeating `shared`:


class C{ /*...*/ }

shared(C) sharedGlobal;
struct Bob{
C unshared;
void setThing() shared{
sharedGlobal=unshared;
}
}

void main(){
C c = new C(); // unshared!
Bob(c).setThing();
shared(C) d = sharedGlobal; // shared!
assert(c !is d); // would fail (currently does not even compile)
// sendToOtherThread(d);
// c.someMethod(); // (potential) race condition on unshared data
}


Re: Calling nested function before declaration

2018-09-27 Thread Timon Gehr via Digitalmars-d

On 27.09.2018 00:46, Jonathan wrote:



I can't see how the current behavior is at all better or to be preferred 
unless it is faster to compile?  What is the reason for it being how it is?


The current behavior is easy to specify and simple to implement, and it 
is what Walter has implemented. A better behavior that is almost as 
simple to implement would be to insert nested functions into the symbol 
table in blocks of back-to-back-defined nested functions, but that would 
be a breaking language change at this point. (Maybe you can get a DIP 
through, though.) Otherwise, the template workaround is a bit ugly but it 
works and is non-intrusive.


Re: Calling nested function before declaration

2018-09-27 Thread Timon Gehr via Digitalmars-d

On 27.09.2018 01:05, Neia Neutuladh wrote:


The standard ways of dealing with this:

* Reorder the declarations.
* Make the functions non-nested.
* Get rid of mutual recursion.
* Use a delegate.
* Do a method-to-method-object refactoring.


* turn the function with the forward reference into a template

void main() {
void fun()() {
fun2();
}
void fun2() {}
fun(); // ok
}


Re: Tuple DIP

2018-09-19 Thread Timon Gehr via Digitalmars-d

On 19.09.2018 23:14, 12345swordy wrote:

On Tuesday, 3 July 2018 at 16:11:05 UTC, 12345swordy wrote:

On Thursday, 28 June 2018 at 13:24:11 UTC, Timon Gehr wrote:

On 26.06.2018 11:55, Francesco Mecca wrote:

On Friday, 12 January 2018 at 22:44:48 UTC, Timon Gehr wrote:
As promised [1], I have started setting up a DIP to improve tuple 
ergonomics in D:


[...]


What is the status of the DIP? Is it ready to be proposed and dicussed?


I still need to incorporate all the feedback from this thread. Also, 
I have started an implementation, and ideally I'd like to have it 
finished by the time the DIP is discussed. Unfortunately I am rather 
busy with work at the moment.


Is there any way we can help on this?


*Bump* I want this.


So do I, but I need to get a quiet weekend or so to finish this.

I am very tempted to start my own dip on this and 
finish it.


Here's the current state of my implementation in DMD:
https://github.com/dlang/dmd/compare/master...tgehr:tuple-syntax

It has no tests yet, but basically, with those changes, you can write 
tuple literals `(1, 2.0, "3")`, you can unpack tuples using `auto (a, b) 
= t;` or `(int a, string b) = t;`, and tuples can be expanded using 
alias this on function calls, so you can now write things like 
`zip([1,2,3],[4,5,6]).map!((a,b)=>a+b)`.


The implementation is still missing built-in syntax for tuple types, 
tuple assignments, and tuple unpacking within function argument lists 
and foreach loops.
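Put together, the features the post describes would look like this. Note that this is the *proposed* syntax from the linked branch and does not compile on stock DMD:

```d
import std.algorithm : map;
import std.range : zip;

void main() {
    auto t = (1, 2.0, "3");            // tuple literal
    auto (a, b, c) = t;                // unpacking with inferred types
    (int x, string s) = (4, "five");   // unpacking with explicit types
    // expansion via alias this on function calls:
    auto sums = zip([1, 2, 3], [4, 5, 6]).map!((p, q) => p + q);
}
```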


Re: John Regehr on "Use of Assertions"

2018-09-08 Thread Timon Gehr via Digitalmars-d

On 06.09.2018 23:47, Walter Bright wrote:

On 9/5/2018 4:55 PM, Timon Gehr wrote:

John rather explicitly states the opposite in the article.


I believe that his statement:

"it’s not an interpretation that is universally useful"

is much weaker than saying "the opposite". He did not say it was "never 
useful".

...


Wait, what?

From this it would follow that "UB on failing assert is never useful" 
is the opposite of your stance. Therefore you would think that "UB on 
failing assert is _sometimes_ useful". (I don't have any qualm with 
this, but I would note that this will not be very common, and that a per 
compilation unit switch is a too coarse-grained way to select asserts 
you want to use for optimization, and it also affects @safe-ty therefore 
there should just be a @system __assume primitive _instead_.)


However, not allowing to _disable_ asserts instead of turning them into 
UB is only a good idea if "UB on failing assert is always useful". (I 
totally, utterly disagree with this and we have filled pages of 
newsgroup posts where you were championing this claim.)


So, which is it?


For example, it is not universally true that airplanes never crash. But it 
is rare enough that we can usefully assume the next one we get on
won't crash.


So if your stance was: "Airplanes don't crash", and John were to come 
and write an article that said:


"There are two ways to think about airplanes:

1. If they crash, you die. This is the best way to think about airplanes.

2. Another popular way to think about airplanes is that they don't 
crash. However, this interpretation is not universally useful. In fact, 
it can be dangerous if adopted by pilots or engineers."


Then you would conclude: "I am very happy that John agrees with me that 
airplanes don't crash." ?


Re: John Regehr on "Use of Assertions"

2018-09-05 Thread Timon Gehr via Digitalmars-d

On 02.09.2018 02:47, Nick Sabalausky (Abscissa) wrote:

On 09/01/2018 08:44 PM, Nick Sabalausky (Abscissa) wrote:


    "Are Assertions Enabled in Production Code?"
    "This is entirely situational."
    "The question of whether it is better to stop or keep going when 
an internal bug is detected is not a straightforward one to answer."



All in all, John is very non-committal about the whole thing.


I think you misunderstood what the original post and Guillaume's 
disappointment was about. Walter claims that John agrees that UB on 
failure is the best default -release behavior for assertions. John 
rather explicitly states the opposite in the article.


Being non-committal about whether assertions should be enabled in 
production or not just means that the language should provide both 
options. D does not. Assertions are always enabled: either they are 
checked or they are used as assumptions.
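The two modes can be seen in one function. This is a sketch; the exact behavior depends on compiler flags, and the "assumption" reading is the contested `-release` semantics discussed in this thread:

```d
int divide(int x) {
    // Default build: the condition is checked at runtime;
    // a failure throws an AssertError.
    // With -release: the check is removed, and the compiler may treat
    // the condition as an assumption, so a false condition is
    // undefined behavior.
    assert(x != 0);
    return 100 / x;
}
```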


Re: John Regehr on "Use of Assertions"

2018-09-05 Thread Timon Gehr via Digitalmars-d

On 01.09.2018 22:15, Walter Bright wrote:

https://blog.regehr.org/archives/1091

As usual, John nails it in a particularly well-written essay.

"ASSERT(expr)
Asserts that an expression is true. The expression may or may not be 
evaluated.

If the expression is true, execution continues normally.
If the expression is false, what happens is undefined."

Note the "may or may not be evaluated." We've debated this here before. 
I'm rather pleased that John agrees with me on this.


He does not! John gives two definitions. The first definition is the one 
I want, and he calls it _the best definition_. (I.e., all other 
definitions are inferior.)


I.e. the optimizer 
can assume the expression is true and use that information to generate 
better code, even if the assert code generation is turned off.


The definition you quoted is the /alternative/ definition. He does not 
call it the best definition, and even explains that it can be dangerous. 
He says "it’s not an interpretation that is universally useful". (!)


I don't understand how you can conclude from this that John's view is 
that this should be the default -release behavior of assertions.


Re: Static foreach bug?

2018-09-05 Thread Timon Gehr via Digitalmars-d

On 05.09.2018 12:29, Dechcaudron wrote:

On Tuesday, 4 September 2018 at 19:50:27 UTC, Timon Gehr wrote:

The only blocker is finding a good syntax.


How does "static enum" sound?


It can't be anything that is legal code today (__local works for all 
declarations, not just enums).


Re: Static foreach bug?

2018-09-05 Thread Timon Gehr via Digitalmars-d

On 05.09.2018 14:41, Andre Pany wrote:

On Wednesday, 5 September 2018 at 12:05:59 UTC, rikki cattermole wrote:


Indeed. scope enum would make much more sense.


scope enum sounds a lot better for me than static enum or even __local. 
The __ words looks a little bit like compiler magic as the __ words are 
reserved for the compiler.


Kind regards
Andre


I agree, but it is not an option as scope already has a different 
meaning, and so this would redefine the semantics of existing code.


Re: Dicebot on leaving D: It is anarchy driven development in all its glory.

2018-09-04 Thread Timon Gehr via Digitalmars-d

On 29.08.2018 22:01, Walter Bright wrote:

On 8/29/2018 10:50 AM, Timon Gehr wrote:
D const/immutable is stronger than immutability in Haskell (which is 
usually _lazy_).


I know Haskell is lazy, but don't see the connection with a weaker 
immutability guarantee.


In D, you can't have a lazy value within an immutable data structure 
(__mutable will fix this).



In any case, isn't immutability a precept of FP?


Yes, but it's at a higher level of abstraction. The important property 
of a (lazy) functional programming language is that a language term can 
be deterministically assigned a value for each concrete instance of an 
environment in which it is well-typed (i.e., values for all free 
variables of the term). Furthermore, the language semantics can be given 
as a rewrite system such that each rewrite performed by the system 
preserves the semantics of the rewritten term. I.e., terms change, but 
their values are preserved (immutable). [1]


To get this property, it is crucially important that the functional 
programming system does not leak reference identities of the underlying 
value representations. This is sometimes called referential 
transparency. Immutability is a means to this end. (If references allow 
mutation, you can detect reference equality by modifying the underlying 
object through one reference and observing that the data accessed 
through some other reference changes accordingly.)
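That detection technique can be sketched directly (a hypothetical helper, not part of any library): mutating through one reference and observing the effect through another makes reference identity observable, which is exactly what referential transparency forbids.

```d
// Probe whether two slices alias the same storage by mutating
// through one and reading through the other.
bool probablySameArray(int[] a, int[] b) {
    if (a.length == 0 || b.length == 0) return false;
    immutable saved = a[0];
    a[0] = saved + 1;                      // write through the first reference
    immutable aliased = b[0] == saved + 1; // observe through the second
    a[0] = saved;                          // undo the probe
    return aliased;
}
```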


Under the hood, functional programming systems simulate term rewriting 
in some way, ultimately using mutable data structures. Similarly, in D, 
the garbage collector is allowed to change data that has been previously 
typed as immutable, and it can type-cast data that has been previously 
typed as mutable to immutable. However, it is impossible to write a GC 
or Haskell-like programs in D with pure functions operating on immutable 
data, because of constraints the language puts on user code that 
druntime is not subject to.


Therefore, D immutable/pure are both too strong and too weak: they 
prevent @system code from implementing value representations that 
internally use mutation (therefore D cannot implement its own runtime 
system, or alternatives to it), and it does not prevent pure @safe code 
from leaking reference identities of immutable value representations:


pure @safe naughty(immutable(int[]) xs){
return cast(long)xs.ptr;
}

(In fact, it is equally bad that @safe weakly pure code can depend on 
the address of mutable data.)




[1] E.g.:

(λa b. a + b) 2 3

and

10 `div` 2

are two terms whose semantics are given as the mathematical value 5.

During evaluation, terms change:

(λa b. a + b) 2 3 ⇝ 2 + 3 ⇝ 5
10 `div` 2 ⇝ 5

However, each intermediate term still represents the same value.


Re: Static foreach bug?

2018-09-04 Thread Timon Gehr via Digitalmars-d

On 02.09.2018 15:45, bauss wrote:
On Sunday, 2 September 2018 at 13:26:55 UTC, Petar Kirov [ZombineDev] 
wrote:
It's intended, but with the possibility to add special syntax for 
local declarations in the future left open, as per:
https://github.com/dlang/DIPs/blob/master/DIPs/accepted/DIP1010.md#local-declarations 



Are there any plans to implement it soon


It has been implemented for a long time (which you will find is actually 
clearly stated if you follow the link above). The only blocker is 
finding a good syntax. Currently, it would be:


static foreach(i;0..2){
__local enum x = 2;
}


or is this going to be another half done feature?

The feature is complete. There are just some further features that might 
go well with it.

