Re: DIP60: @nogc attribute

2014-04-16 Thread Timon Gehr via Digitalmars-d

On 04/16/2014 10:10 PM, Peter Alexander wrote:


However, that raises a second question: since err is allocated when a
new thread is created, does that mean @nogc functions cannot create
threads in the presence of such static initialisation?


This does not allocate on the GC heap.


Re: DIP60: @nogc attribute

2014-04-17 Thread Timon Gehr via Digitalmars-d

On 04/17/2014 02:34 PM, Manu via Digitalmars-d wrote:

On 17 April 2014 21:57, John Colvin via Digitalmars-d
<digitalmars-d@puremagic.com> wrote:

On Thursday, 17 April 2014 at 11:31:52 UTC, Manu via Digitalmars-d
wrote:

ARC offers a solution that is usable by all parties.
...
You can't use a GC in a
low-memory environment, no matter how it's designed. It allocates until
it can't, then spends a large amount of time re-capturing unreferenced
memory. As free memory decreases, this becomes more and more frequent.


What John was trying to get at is that the two quoted statements above 
are in contradiction with each other. A GC is a subsystem that 
automatically frees dead memory. (Dead as in it will not be accessed 
again, which is a weaker notion than it being unreferenced.)


Maybe the distinction you want to make is between ARC and tracing 
garbage collectors.




Re: Knowledge of managed memory pointers

2014-04-17 Thread Timon Gehr via Digitalmars-d

On 04/17/2014 08:55 AM, Manu via Digitalmars-d wrote:

It occurs to me that a central issue regarding the memory management
debate, and a major limiting factor with respect to options, is the fact
that, currently, it's impossible to tell a raw pointer apart from a gc
pointer.

Is this a problem worth solving? And would it be as big an enabler to
address some tricky problems as it seems to be at face value?

What are some options? Without turning to fat pointers or convoluted
changes in the type system, are there any clever mechanisms that could
be applied to distinguish managed from unmanaged pointers?


It does not matter if changes to the type system are 'convoluted'. (They 
don't need to be.)



If an API
could be provided in druntime, it may be used by GC's, ARC, allocators,
or systems that operate at the barrier between languages.



There already is.

bool isGCPointer(void* ptr){
    import core.memory;
    return !!GC.addrOf(ptr);
}

void main(){
    import std.c.stdlib;
    auto x=cast(int*)malloc(int.sizeof);
    auto y=new int;
    assert(!x.isGCPointer() && y.isGCPointer());
}



Re: DIP60: @nogc attribute

2014-04-18 Thread Timon Gehr via Digitalmars-d

On 04/18/2014 10:50 AM, bearophile wrote:



Honestly, I think expecting that code to be allowed to use @nogc is a
huge mistake and disagree with editing the DIP to include this solely
because you decided it should.


That Wiki page is editable, so if it's wrong it takes one minute to fix
the text I have written.  What I have decided to include is an explicit
explanation regarding what a correct D compiler should do in that case.


In which case? In case some version of LDC2 is able to avoid the heap 
allocation using full optimizations? :o)


Re: DIP61: Add namespaces to D

2014-04-26 Thread Timon Gehr via Digitalmars-d

On 04/26/2014 11:31 AM, Walter Bright wrote:

http://wiki.dlang.org/DIP61

Best practices in C++ code increasingly means putting functions and
declarations in namespaces. Currently, there is no support in D to call
C++ functions in namespaces. The primary issue is that the name mangling
doesn't match. Need a simple and straightforward method of indicating
namespaces.

There have been many proposals earlier:

   http://forum.dlang.org/post/lhi1lt$269h$1...@digitalmars.com

but it seems to me that the simplest, most straightforward approach
would be better.
...


I agree.


As more and more people are attempting to call C++ libraries from D,
this is getting to be a more and more important issue.


Looks good to me, but I think that the current limited lookup rules for 
template mixins are not really good enough to accommodate common 
usage patterns of namespaces.


I think the following should both just work:

import std.stdio;

mixin template Foo(T){
    T foo(T a){ return a; }
}
mixin Foo!int g;
mixin Foo!string g;

void main(){
    writeln(foo(2));
    writeln(foo("a"));
    writeln(g.foo(2));
    writeln(g.foo("a"));
}

// -

import std.stdio;

namespace g{
    int foo(int a){ return a; }
}
namespace g{
    string foo(string a){ return a; }
}

void main(){
    writeln(foo(2));
    writeln(foo("a"));
    writeln(g.foo(2));
    writeln(g.foo("a"));
}

Both examples should still work if the two mixins/namespaces occur in 
(possibly different) imported modules. I think this would be in line 
with how lookup is generally handled in D. (Note that I am not 
suggesting to make namespaces extensible, but rather to make them 
overloadable.) What do you think about this?




Re: DIP61: Add namespaces to D

2014-04-26 Thread Timon Gehr via Digitalmars-d

On 04/26/2014 01:57 PM, Dicebot wrote:




I think this is a very bad proposal. Necessity to define namespaces for
interfacing with C++ must not result in usage of namespaces in pure D code.


Well, the proposed feature does not add any new capabilities except 
proper mangling. In pure D code


namespace foo{
// declarations
}

would be basically the same as

private mixin template Foo(){
// declarations
}
mixin Foo foo;

which is available today. I guess namespaces will occur in pure D code 
as sparsely as the above construction, because they are not particularly 
useful.


Re: DIP61: Add namespaces to D

2014-04-26 Thread Timon Gehr via Digitalmars-d

On 04/26/2014 08:13 PM, Walter Bright wrote:


private mixin template Foo(){
 // declarations
}
mixin Foo foo;
... I guess namespaces will occur in pure D code as
sparsely as the above construction, because they are not particularly
useful.


Yeah, template mixins turned out to be a solution looking for a problem.


I was actually referring to the exact pattern above. I.e. a 
parameter-less mixin template that is mixed in immediately exactly one 
time for the named scope alone. :)


Re: DIP61: Add namespaces to D

2014-04-26 Thread Timon Gehr via Digitalmars-d

On 04/26/2014 08:15 PM, Walter Bright wrote:

On 4/26/2014 5:37 AM, Timon Gehr wrote:

import std.stdio;

namespace g{
 int foo(int a){ return a; }
}
namespace g{
 string foo(string a){ return a; }
}

void main(){
 writeln(foo(2));
 writeln(foo("a"));
 writeln(g.foo(2));
 writeln(g.foo("a"));
}

Both examples should still work if the two mixins/namespaces occur in
(possibly
different) imported modules. I think this would be in line with how
lookup is
generally handled in D. (Note that I am not suggesting to make namespaces
extensible, but rather to make them overloadable.) How do you think
about this?


Yes, that's how I anticipate it working.


Nice.


That's just following existing rules.



The example with the mixins does not actually compile. I have filed this:
https://issues.dlang.org/show_bug.cgi?id=12659


Re: DIP61: Add namespaces to D

2014-04-26 Thread Timon Gehr via Digitalmars-d

On 04/26/2014 09:27 PM, Daniel Murphy wrote:


We already have a feature to manage conflicts and organisation in D code - 
modules!


Named mixin templates are a much closer fit.


There is no need to add namespaces to do that, and if that's really what you 
want
it belongs in a completely different discussion.
...


Why? Every name resolution rule the DIP adds already exists in D.


The thing we can't (easily) do is mangle C++ namespaces, so a feature
that only affects mangling is perfect.

i.e. We don't need namespaces in D, because modules cover that.



We only need namespace mangling.


Which is all the DIP adds. I do not really understand the objections.


Re: DIP61: Add namespaces to D

2014-04-26 Thread Timon Gehr via Digitalmars-d

On 04/26/2014 10:32 PM, Daniel N wrote:


The disadvantage of this is that it forces one .di file per namespace,
whereas in C++ people frequently use different namespaces within the
same file.


Andrei


I would argue that this restriction is a benefit not a disadvantage,


Why on earth would it be a benefit to preclude organising the binding in 
the file system so that it mirrors the C++ side?


Re: DIP61: Add namespaces to D

2014-04-26 Thread Timon Gehr via Digitalmars-d

On 04/26/2014 09:56 PM, Andrei Alexandrescu wrote:


I think this is not a good proposal from a proportional response
standpoint: it squanders a keyword for a minor feature.

I also think the preexisting suggestions are each wanting in various ways.

That's why we should guide the discussion not in the direction of
ranking existing proposals, but instead to acknowledge we have a
collection of bad proposals on the table and we need to define a better
one.


I.e. your only objection to this is its syntax?


Re: DIP61: Add namespaces to D

2014-04-26 Thread Timon Gehr via Digitalmars-d

On 04/27/2014 12:03 AM, David Nadlinger wrote:

On Saturday, 26 April 2014 at 21:57:55 UTC, Timon Gehr wrote:

Which is all the DIP adds. I do not really understand the objections.


It adds a new language feature, which is not only used in a rather
specific situation, but also very likely to be confused with the
eponymous feature from other languages. Just add the C++ mangling
functionality to mixin templates using a pragma/attribute and be done,
no need to add a whole new language primitive.

David


I.e.

mixin template SpareIdentifier(){
// ...
}

extern(C++) pragma(namespace) mixin SpareIdentifier foo;



Re: DIP61: Add namespaces to D

2014-04-26 Thread Timon Gehr via Digitalmars-d

On 04/27/2014 12:43 AM, Dicebot wrote:

On Saturday, 26 April 2014 at 21:57:55 UTC, Timon Gehr wrote:

On 04/26/2014 09:27 PM, Daniel Murphy wrote:


We already have a feature to manage conflicts and organisation in D
code - modules!


Named mixin templates are a much closer fit.


Using named mixin templates for pure scope resolution is a side effect
and should be discouraged in any reasonable code.


I don't really advocate using named mixin templates directly as much as 
just the same lookup rules.



There are specific D tools
designed for that from the very beginning


Named mixin templates are also 'designed for scope resolution from the 
very beginning' if that means anything.



and we should use and/or fix those.


They don't fit. You simply cannot have multiple D modules per file.


Re: DIP61: Add namespaces to D

2014-04-26 Thread Timon Gehr via Digitalmars-d

On 04/27/2014 01:11 AM, Dicebot wrote:

On Saturday, 26 April 2014 at 23:05:23 UTC, Timon Gehr wrote:

On 04/27/2014 12:43 AM, Dicebot wrote:

On Saturday, 26 April 2014 at 21:57:55 UTC, Timon Gehr wrote:

On 04/26/2014 09:27 PM, Daniel Murphy wrote:


We already have a feature to manage conflicts and organisation in D
code - modules!


Named mixin templates are a much closer fit.


Using named mixin templates for pure scope resolution is a side effect
and should be discouraged in any reasonable code.


I don't really advocate using named mixin templates directly as much
as just the same lookup rules.


Well that wasn't clear from your comments at all, quite the contrary ;)
...


Wtf?


There are specific D tools
designed for that from the very beginning


Named mixin templates are also 'designed for scope resolution from the
very beginning' if that means anything.


and we should use and/or fix those.


They don't fit. You simply cannot have multiple D modules per file.


I don't see any problem with having a lot of files. It is the natural
way of organizing D code if you consider protection attributes that
define the module as the minimal encapsulation unit.


I don't see the point of requiring replication of the namespace 
structure in directories just for the sake of conflating modules and 
namespaces. #includes bear a closer resemblance in usage to imports 
than using directives do, and doing better is basically free because 
the required lookup semantics are already there. Why discuss anything 
but syntax at this point?


Re: DIP60: @nogc attribute

2014-04-26 Thread Timon Gehr via Digitalmars-d

On 04/27/2014 01:32 AM, bearophile wrote:

If I am not missing some more point, what is the best solution?


Before this question gets lost, I'd like to receive some kind of answer.

Thank you,
bearophile


The front end already distinguishes dynamic and static array literals 
(in a limited form), this distinction should simply carry through to 
code generation and static array literals should be allowed in @nogc code.


Re: possible bug in std.conv.parse

2014-04-26 Thread Timon Gehr via Digitalmars-d

On 04/27/2014 01:43 AM, Adam D. Ruppe wrote:

On Saturday, 26 April 2014 at 23:36:28 UTC, ketmar wrote:

this code: std.conv.parse!byte("-128") throws the error "Overflow in
integral conversion", but this is obviously not true, as a signed byte
can hold such a value.


Check your math... the most negative number a signed byte can hold is
-127. The most positive number it can hold is 128, but negating that
wouldn't fit in eight bits.


Check your math. :o)
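The two's-complement range of a signed byte is [-128, 127], so "-128" is representable; a parser that reads the magnitude first and checks it against +127 before negating is what produces the spurious overflow. A hypothetical Python sketch of the pitfall (the function names are illustrative, not std.conv's actual implementation):

```python
# Two's-complement 8-bit range is asymmetric: [-128, 127].
INT8_MIN, INT8_MAX = -(2 ** 7), 2 ** 7 - 1

def parse_byte_naive(s):
    """Hypothetical parser: read the magnitude, then negate.
    Wrongly rejects "-128" because +128 does not fit in a byte."""
    neg = s.startswith("-")
    magnitude = int(s.lstrip("-"))
    if magnitude > INT8_MAX:  # fires for "-128": magnitude is 128
        raise OverflowError("Overflow in integral conversion.")
    return -magnitude if neg else magnitude

def parse_byte_fixed(s):
    """Accumulate on the negative side, where the full range fits."""
    neg = s.startswith("-")
    value = -int(s.lstrip("-"))
    limit = INT8_MIN if neg else -INT8_MAX
    if value < limit:
        raise OverflowError("Overflow in integral conversion.")
    return value if neg else -value

assert parse_byte_fixed("-128") == -128
assert parse_byte_fixed("127") == 127
```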


Re: DIP61: Add namespaces to D

2014-04-26 Thread Timon Gehr via Digitalmars-d

On 04/27/2014 01:59 AM, Andrei Alexandrescu wrote:

On 4/26/14, 4:32 PM, Walter Bright wrote:

Since the namespace keyword doesn't seem to be gaining much traction, an
alternative syntax would be:

 extern (C++, N.M) { void foo(); }

which would be semantically equivalent to the previous:

 extern (C++) namespace N { namespace M { void foo(); }}


Noice. Would the user call it with N.M.foo() or just foo()? -- Andrei


foo(), N.foo() or N.M.foo(). (Name clashes may increase the required 
granularity.)


Re: DIP61: redone to do extern(C++,N) syntax

2014-04-29 Thread Timon Gehr via Digitalmars-d

On 04/29/2014 01:34 PM, Simen Kjærås via Digitalmars-d wrote:


Building on this knowledge:

module foo;

void func();


module bar;

extern(C++, foo) void func();


module prog;

import foo;
import bar;

void main()
{
 // Seems like it's ambiguous between foo.func and bar.foo.func.
 foo.func();
}


It will call foo.func, because the module foo is in scope in module prog 
and hence hides the namespace foo in module bar.


You can already try this today, as the DIP _does not actually introduce 
any new lookup rules_:


module foo;
void func(){}
//---
module bar;

mixin template X(){
    void func(){}
}
mixin X foo;
//---
module prog;
import foo,bar;

void main(){
    foo.func();
}

In particular, any problems you find with symbol lookup are actually 
orthogonal to the DIP.


Re: DIP61: redone to do extern(C++,N) syntax

2014-04-29 Thread Timon Gehr via Digitalmars-d

On 04/29/2014 02:01 PM, Steven Schveighoffer wrote:

That is what the DIP says:

"Declarations in the namespace can be accessed without qualification in
the enclosing scope if there is no ambiguity. Ambiguity issues can be
resolved by adding the namespace qualifier."

Which then proceeds to show that only the namespace qualifier is needed
to disambiguate the symbol.

-Steve


You seem to be missing the only important statement that the DIP makes 
about symbol disambiguation, that follows straight after those examples:


"Name lookup rules are the same as for mixin templates."

(Maybe it should say 'named template mixins'.)


Re: DIP61: redone to do extern(C++,N) syntax

2014-04-29 Thread Timon Gehr via Digitalmars-d

On 04/29/2014 05:52 PM, Steven Schveighoffer wrote:

I am not familiar with the rules.

Perhaps you can just outline for me:

module bar;

extern(C++, foo) void func();

module prog;

import bar;

void main()
{
foo.func(); // works?
}

If this works, then we have a problem.


It does work. What happens is analogous to:

module bar;

void func(){}

module prog;
import bar;

void func(){}

void main(){
func();
}

I.e. if you add a new symbol to your own module, then any imported 
symbol with the same identifier will be hidden, no questions asked. 
Importing a module adds a new symbol of its name to your module. I'm 
not sure why you see this as a problem. 
The name lookup rules are designed such that changes in one module 
cannot silently change name lookup in _another_ module, but anything may 
happen to lookup in the same module.



If it doesn't work, well, then I
see nobody using this feature (using C++ namespace for disambiguation,
not the mangling, which is still useful).

-Steve


The name disambiguation feature fundamentally addresses one simple 
issue, namely the case where the same name occurs in multiple different 
namespaces in the same module. (It is trivial to implement in a 
compiler, if that helps, because there is no new machinery; just share 
the implementation with named mixin templates.)


Re: DIP61: redone to do extern(C++,N) syntax

2014-04-29 Thread Timon Gehr via Digitalmars-d

On 04/29/2014 10:49 PM, Steven Schveighoffer wrote:

On Tue, 29 Apr 2014 16:00:43 -0400, Steven Schveighoffer
schvei...@yahoo.com wrote:


On Tue, 29 Apr 2014 15:52:01 -0400, Walter Bright
newshou...@digitalmars.com wrote:



Because the compiler would now issue an error for that, it's its
anti-hijacking feature.

Try it and see!


I agree! that was my central point, which Timon seemed to be arguing
against :)


And in fact, so were you!
...

This is EXACTLY the same code that you said would now be an error above!


Maybe he didn't notice that you changed the 'main' function relative to 
my post. If you don't mention 'foo' explicitly, then obviously it cannot 
be hidden by the import and the code is in error.




I think you guys  need to reexamine this,


Not me. I typically test my claims even if I am sure, if only to file 
bug reports.



and choose one way or another.
At this point, I have no clue as to how it's supposed to work.



Obviously you did not actually try. :P

Again, to make it really easy for you to test the behaviour:

module foo;
import std.stdio;

int func(){ writeln("hello from foo!"); return 1; }

//---

module bar;
import std.stdio;

mixin template X(){
    int func(){ writeln("hello from bar!"); return 2; }
}
mixin X foo;

//---

module prog;

void main(){
    void onlybar(){
        import bar;
        auto r=foo.func(); // hello from bar!
        assert(r==2); // pass!
    }
    void fooandbar(){
        import bar,foo;
        auto r=foo.func(); // hello from foo!
        assert(r==1); // pass!
    }
    onlybar();
    fooandbar();
}


http://dlang.org/download.html

$ dmd prog foo bar && ./prog
hello from bar!
hello from foo!

This is because the import of module 'foo' hides the namespace 'foo' 
imported from 'bar' in the scope of 'fooandbar'. It is not 'func' that 
is being hidden, but 'foo'.




Re: DIP61: redone to do extern(C++,N) syntax

2014-04-30 Thread Timon Gehr via Digitalmars-d

On 04/30/2014 02:41 AM, Steven Schveighoffer wrote:

On Tue, 29 Apr 2014 17:38:07 -0400, Timon Gehr timon.g...@gmx.ch wrote:
...


Maybe he didn't notice that you changed the 'main' function relative
to my post. If you don't mention 'foo' explicitly, then obviously it
cannot be hidden by the import and the code is in error.


I never changed the code. It always was foo.func(). That was my point.
...


(What I mean is that in the post you were answering, main just contained 
'foo'.)






I think you guys  need to reexamine this,


Not me. I typically test my claims even if I am sure, if only to file
bug reports.


Your view has been consistent. I think it defeats the purpose of having
namespaces to disambiguate calls, since the disambiguation itself
conflicts with modules.

If foo.func means one thing today and another thing tomorrow, based on
an import, I think the feature is flawed. Template namespaces are just
as flawed.




and choose one way or another.
At this point, I have no clue as to how it's supposed to work.



Obviously you did not actually try. :P


No, I get what you are saying. The above statement is because I'm
getting conflicting reports from Walter.
...


Ok, apologies.


...

module foo;
import std.stdio;

int func(){ writeln("hello from foo!"); return 1; }

//---

module bar;
import std.stdio;

mixin template X(){
 int func(){ writeln("hello from bar!"); return 2; }
}
mixin X foo;

//---

module prog;

void main(){
 void onlybar(){
 import bar;
 auto r=foo.func(); // hello from bar!
 assert(r==2); // pass!
 }
 void fooandbar(){
 import bar,foo;
 auto r=foo.func(); // hello from foo!
 assert(r==1); // pass!
 }
 onlybar();
 fooandbar();
}


Wouldn't a similar test be to create a struct for a namespace?
...


Yes, and this behaves the same in this specific case.


The confusing issue here to C++ programmers is, when I specify x::y::z,
it means z in namespace x::y, regardless of where it was imported. If in
D we say you can access this via x.y.z, they are going to think they can
always type that. To have it so easily break is not a good thing.
...


If this is a problem, I guess the most obvious alternatives are to:

1. Get rid of namespace scopes. Require workarounds in the case of 
conflicting definitions in different namespaces in the same file. (Eg. 
use a mixin template.) I'd presume this would not happen often.


2. Give the global C++ namespace a distinctive name and put all other 
C++ namespaces below it. This way fully qualified name lookup will be 
reliable.




This is because the import of module 'foo' hides the namespace 'foo'
imported from 'bar' in the scope of 'fooandbar'. It is not 'func' that
is being hidden, but 'foo'.


In fact, it's the entire foo namespace.
...


Yes, but this is visible in the file that is being changed.


So basically any namespace that matches the root phobos import path will
cause conflicts. You don't suppose any C++ code uses that do you? ;)



How many C++ standard headers contain conflicting symbol names in 
different namespaces? If the namespace symbol really needs to be 
addressable, there could be an alias, e.g. alias cpp=std;


Re: D vs Rust: function signatures

2014-04-30 Thread Timon Gehr via Digitalmars-d

On 04/30/2014 04:04 AM, Vladimir Panteleev wrote:


fn map<'r, B>(self, f: |A|: 'r -> B) -> Map<'r, A, B, Self>


Same as points 1 and 3 above (D's version allows specifying multiple
functions).

Not sure what 'r or |A| means in Rust syntax, but I guess this would be
the equivalent D syntax:

auto map(R)(R delegate(T))


|A| -> B is the type of a closure mapping an A to a B. 'r is a lifetime 
parameter (there is another one in Self):


I.e. that signature is roughly saying: The returned iterator lives at 
most as long as the closure context of f and the underlying iterator.


This way you can eg. get an iterator over some mutable data structure, 
map it without allocations using a stack closure, and the type system 
verifies that the data structure is not changed while the mapped 
iterator is in use, and that there are no dangling references to stack 
memory left behind.
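A rough sketch of that guarantee in today's Rust (the post's `|A|: 'r` closure syntax is from 2014; the names and signature below are mine, not from the original thread):

```rust
// The returned adapter borrows both the slice and the closure for 'r,
// so it cannot outlive either of them: no dangling stack references.
fn map_ref<'r, A, B: 'r>(
    xs: &'r [A],
    f: &'r dyn Fn(&A) -> B,
) -> impl Iterator<Item = B> + 'r {
    xs.iter().map(move |a| f(a))
}

fn main() {
    let data = vec![1, 2, 3];
    let double = |a: &i32| a * 2;
    // Lazy: no per-element allocation happens here.
    let mapped: Vec<i32> = map_ref(&data, &double).collect();
    assert_eq!(mapped, vec![2, 4, 6]);
    // Mutating `data` while an iterator from map_ref was still live
    // would be rejected at compile time by the borrow checker.
}
```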


Re: More radical ideas about gc and reference counting

2014-04-30 Thread Timon Gehr via Digitalmars-d

On 04/30/2014 10:45 PM, Andrei Alexandrescu wrote:


An extreme one indeed, it would break a lot of my code. Every D project
I wrote that does networking manages memory using a class that resides
on the managed heap, but holds the actual wrapped data in the unmanaged
heap.


So should I take it those classes all have destructors? -- Andrei


(Yes, those destructors free the unmanaged memory.)


Re: More radical ideas about gc and reference counting

2014-04-30 Thread Timon Gehr via Digitalmars-d

On 04/30/2014 10:21 PM, Andrei Alexandrescu wrote:

Walter and I have had a long chat in which we figured our current
offering of abstractions could be improved. Here are some thoughts.
There's a lot of work ahead of us on that and I wanted to make sure
we're getting full community buy-in and backup.

First off, we're considering eliminating destructor calls from within
the GC entirely. It makes for a faster and better GC, but the real
reason here is that destructors are philosophically bankrupt in a GC
environment. I think there's no need to argue that in this community.
The GC never guarantees calling destructors even today, so this decision
would be just a point in the definition space (albeit an extreme one).

That means classes that need cleanup (either directly or by having
fields that are structs with destructors) would need to garner that by
other means, such as reference counting or manual. We're considering
deprecating ~this() for classes in the future.
...


struct S{
~this(){ /* ... */ }
/* ... */
}

class C{
S s;
}

?


Also, we're considering a revamp of built-in slices, as follows. Slices
of types without destructors stay as they are.

Slices T[] of structs with destructors shall be silently lowered into
RCSlice!T, defined inside object.d. That type would occupy THREE words,
one of which being a pointer to a reference count. That type would
redefine all slice primitives to update the reference count accordingly.

RCSlice!T will not convert implicitly to void[]. Explicit cast(void[])
will be allowed, and will ignore the reference count (so if a void[]
extracted from a T[] via a cast outlives all slices, dangling pointers
will ensue).

I foresee any number of theoretical and practical issues with this
approach. Let's discuss some of them here.
...


struct S{
~this(){ /* ... */ }
}

class C{
S[] s;
this(S[] s){ /* ... */ }
}

void main(){
new C(buildSs());
// memory leak
}

Also, cycles.
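The cycle problem with reference counting can be demonstrated in any refcounted runtime; a CPython sketch (CPython combines refcounting with a tracing cycle collector, which the example disables to isolate the effect):

```python
import gc
import weakref

class Node:
    def __init__(self):
        self.other = None

gc.disable()  # isolate pure reference counting from the cycle collector

# Build a two-node cycle, then drop every external reference.
a, b = Node(), Node()
a.other, b.other = b, a
probe = weakref.ref(a)
del a, b

# Reference counting alone never frees the cycle: each node is still
# referenced by the other, so both refcounts stay at 1.
assert probe() is not None

# A tracing pass (CPython's cycle collector) is needed to reclaim it.
gc.collect()
gc.enable()
assert probe() is None
```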


Re: More radical ideas about gc and reference counting

2014-04-30 Thread Timon Gehr via Digitalmars-d

On 04/30/2014 10:58 PM, Andrei Alexandrescu wrote:

On 4/30/14, 1:56 PM, Timon Gehr wrote:


struct S{
 ~this(){ /* ... */ }
 /* ... */
}

class C{
 S s;
}

?


By hand, as I mentioned. -- Andrei



I meant, is it going to be deprecated too?


Re: More radical ideas about gc and reference counting

2014-04-30 Thread Timon Gehr via Digitalmars-d

On 04/30/2014 10:57 PM, Andrei Alexandrescu wrote:



RCSlice!(const T) has nothing to do with const(RCSlice!T) as far as the
compiler is concerned.

...


There's always this hack:

struct RCSlice(T){
    static if(!is(T==const))
        RCSlice!(const(StripImmutable!T)) upcast(){
            return cast(typeof(return))this;
        }
    alias upcast this;
    // ...
}


We need to improve the language to allow for such. Did I mention it's
not going to be easy?


Well, actually an easy way would be to have the compiler infer safe 
coercions.


(A 'coercion' here is meant to denote a no-op implicit conversion, for 
example, conversion to const.)


It would be enabled with an attribute on the template. Coercions between 
different instantiations of the same templated type would be safe and 
allowed if the memory layout does not change between them and the 
compiler can prove the coercion safe field-wise. There needn't be 
restrictions on non-field members. Types annotated like this would get 
the top-level mutability specifier stripped off if possible, during IFTI 
and when cast() is applied.


In particular, this would work:

class C{}
class D:C{}

@infer_coercions
struct Slice(T){
    size_t length;
    T* ptr;
}

void foo(T)(T arg){ /* ... */ }

void main()@safe{
    auto a=[new D,new D];
    auto s=Slice!D(a.length,a.ptr);
    Slice!(const(C)) sc=s; // ok, D* coercible to const(C)*
    const(Slice!C) cs=s; // ok, D* coercible to n-m-ind const(C*)
    sc=cs; // ok, D* is coercible to n-m-ind const(C)*
    // (length works because const(size_t) and size_t can be
    //  freely coerced between each other.)

    foo(cs); // actually calls foo!(Slice!(const(C)))
    // => no more magic in T[]
}

I.e. slices can be implemented in the library under this proposal.
(The above just forwards the magical behaviour of T* to a user-defined 
type.)


I think this fixes the issue in general, for example, for ranges:

const rng1 = [1,2,3].map!(a=>2*a);
const rng2 = rng1.filter!(a=>!!(a&1)); // ok



Probably this should be slightly generalized, eg:

@infer_coercions
struct S(T){
T field;
}

void foo(T)(T arg){}

void main(){
S!(T[]) s;
foo(s); // foo!(S!(const(T)[]))
}

However, then, whether to do const(S!T) => S!(const(T)) or const(S!T) => 
S!(TailConst!T) should maybe be specified on a per-parameter basis, 
because this is in general not easy to figure out for the compiler.





Re: More radical ideas about gc and reference counting

2014-04-30 Thread Timon Gehr via Digitalmars-d

On 05/01/2014 12:24 AM, Timon Gehr wrote:


I think this fixes the issue in general, for example, for ranges:

const rng1 = [1,2,3].map!(a=>2*a);
const rng2 = rng1.filter!(a=>!!(a&1)); // ok



Probably this should be slightly generalized, eg:
...


(The generalization is actually needed for the range example to work.)



Re: More radical ideas about gc and reference counting

2014-04-30 Thread Timon Gehr via Digitalmars-d

On 05/01/2014 12:24 AM, Timon Gehr wrote:

 const(Slice!C) cs=s; // ok, D* coercible to n-m-ind const(C*)
 sc=cs; // ok, D* is coercible to n-m-ind const(C)*


(Ignore the 'n-m-ind', those are accidental leftovers from a half-baked 
version of the post.)


Re: More radical ideas about gc and reference counting

2014-05-01 Thread Timon Gehr via Digitalmars-d

On 05/01/2014 12:48 AM, deadalnix wrote:

On Wednesday, 30 April 2014 at 22:24:29 UTC, Timon Gehr wrote:

However, then, whether to do const(S!T) => S!(const(T)) or const(S!T)
=> S!(TailConst!T) should maybe be specified on a per-parameter basis,


That is the whole problem :D


Well, actually: just check whether there is a field, of the exact type 
of some parameter, that would need to change type. If so, try the 
second mapping for that parameter; otherwise try the first.


Re: Scenario: OpenSSL in D language, pros/cons

2014-05-06 Thread Timon Gehr via Digitalmars-d

On 05/05/2014 12:41 PM, Jonathan M Davis via Digitalmars-d wrote:

Regardless, there's
nothing fundamentally limited about @safe except for operations which are
actually unsafe with regards to memory


What does 'actually unsafe' mean? @safe will happily ban statements that 
will never 'actually' result in memory corruption.


Re: From slices to perfect imitators: opByValue

2014-05-08 Thread Timon Gehr via Digitalmars-d

On 05/08/2014 05:58 AM, Andrei Alexandrescu wrote:

...

import std.stdio;

void fun(T)(T x)
{
writeln(typeof(x).stringof);
}

void main()
{
immutable(int[]) a = [ 1, 2 ];
writeln(typeof(a).stringof);
fun(a);
}

This program outputs:

immutable(int[])
immutable(int)[]

which means that the type of that value has subtly and silently changed in the 
process of passing it to a function.
...


This way of stating it is imprecise: What happened is that during 
implicit function template instantiation, T was determined to be 
immutable(int)[] instead of immutable(int[]). Then 'a' was implicitly 
converted from immutable(int[]) to immutable(int)[] just as it would be 
done for any function call with those argument and parameter types.



This change was introduced a while ago (by Kenji I recall) and it enabled a lot 
of code that was gratuitously rejected.
This magic of T[] is something that custom ranges can't avail themselves
of.  In order to bring about parity, we'd need to introduce opByValue
which (if present) would be automatically called whenever the object is
passed by value into a function.
...


There are two independent pieces of magic here, and this proposal 
removes none of them in a satisfactory way:


struct S(T){
HeadUnqual!(typeof(this)) opByValue(){ ... }
...
}

void fun(in S!int a){}

void main(){
const S!int s;
fun(s); // error, cannot implicitly convert expression s.opByValue
// of type S!(const(int)) to const(S!int).
}


A better ad-hoc way of resolving this is opImplicitCast and a special 
opIFTIdeduce member or something like that.


struct S(T){
alias opIFTIdeduce = HeadUnqual!(typeof(this));
S opImplicitCast(S)(typeof(this) arg){ ... }
...
}

But this just means that now every author of a datatype of suitable kind 
has to manually re-implement implicit conversion rules of T[].


A probably even better way is to just allow to specify by annotation 
that some templated datatype should mimic T[]'s implicit conversion 
rules (and without necessarily providing a direct and overpowered hook 
into the IFTI resolution process, though both could be done.)



This change would allow library designers to provide good solutions to
making immutable and const ranges work properly - the way T[] works.
...


Questionable.

const(T)[] b = ...;
const(T[]) a = b; // =)
Range!(const(T)) s = ...;
const(Range!T) r = s; // =(



Re: From slices to perfect imitators: opByValue

2014-05-08 Thread Timon Gehr via Digitalmars-d

On 05/08/2014 08:55 AM, Jonathan M Davis via Digitalmars-d wrote:

As far as I can see, opByValue does the same thing as opSlice, except
that it's used specifically when passing to functions, whereas this code

immutable int [] a = [1, 2, 3];
immutable(int)[] b = a[];

or even

immutable int [] a = [1, 2, 3];
immutable(int)[] b = a;

compiles just fine. So, I don't see how adding opByValue helps us any.
Simply calling opSlice implicitly for user-defined types in the same
places that it's called implicitly on arrays would solve that problem.
We may even do some of that already, though I'm not sure.


Automatic slicing on function call is not what actually happens. You can 
see this more clearly if you pass an immutable(int*) instead of an 
immutable(int[]): there is no way to slice the former and the mechanism 
that determines the parameter type to be immutable(int)* is the same. 
(And indeed doing the opSlice would have undesirable side-effects, for 
example, your pet peeve, implicit slicing of stack-allocated static 
arrays would happen on any IFTI call with a static array argument. :o) 
Also, other currently idiomatic containers couldn't be passed to functions.)


Re: From slices to perfect imitators: opByValue

2014-05-08 Thread Timon Gehr via Digitalmars-d

On 05/08/2014 12:14 PM, Michel Fortin wrote:


Will this solve the problem that const(MyRange!(const T)) is a different
type from const(MyRange!(T))?


No, but as stated it aggravates this problem.


I doubt it. But they should be the same
type if we want to follow the semantics of the language's slices, where
const(const(T)[]) is the same as const(T[]).

Perhaps this is an orthogonal issue,  but I wonder whether a solution to
the above problem could make opByValue unnecessary.


Not necessarily automatically, because there would still need to be a 
way to figure out that actually const(S!T) - S!(const(T)) is the way to 
remove top-level constness. (Because sometimes it is actually 
const(S!(T[])) - S!(const(T)[]), for example, for most ranges in 
std.algorithm.)


But I think the above problem is the fundamental one.


Re: From slices to perfect imitators: opByValue

2014-05-08 Thread Timon Gehr via Digitalmars-d

On 05/08/2014 09:01 AM, Jonathan M Davis via Digitalmars-d wrote:

It really has nothing to do with passing an argument to a function
beyond the fact that that triggers an implicit call to the slice operator.


module check_claim;
import std.stdio;

auto foo(immutable(int)[] a){writeln("Indeed, this is what happens.");}
auto foo(immutable(int[]) a){writeln("No, this is not what happens.");}

void main(){
immutable(int[]) x;
foo(x);
}

$ dmd -run check_claim
No, this is not what happens.


Re: From slices to perfect imitators: opByValue

2014-05-08 Thread Timon Gehr via Digitalmars-d

On 05/08/2014 01:05 PM, monarch_dodra wrote:

...
In fact, I'm wondering if this might not be a more interesting direction
to explore.


http://forum.dlang.org/thread/ljrm0d$28vf$1...@digitalmars.com?page=3#post-ljrt6t:242fpc:241:40digitalmars.com

http://forum.dlang.org/thread/ljrm0d$28vf$1...@digitalmars.com?page=7#post-ljt0mc:24cto:241:40digitalmars.com


Re: From slices to perfect imitators: opByValue

2014-05-08 Thread Timon Gehr via Digitalmars-d

On 05/08/2014 06:02 PM, monarch_dodra wrote:


If you have const data referencing mutable data, then yes, you can cast
away all the const you want, but at that point, it kind of makes the
whole const thing moot.


This is not guaranteed to work. I guess the only related thing that is 
safe to do is casting away const, but then not modifying the memory.


Re: From slices to perfect imitators: opByValue

2014-05-08 Thread Timon Gehr via Digitalmars-d

On 05/08/2014 06:30 PM, Sönke Ludwig wrote:

Am 08.05.2014 18:10, schrieb Timon Gehr:

On 05/08/2014 06:02 PM, monarch_dodra wrote:


If you have const data referencing mutable data, then yes, you can cast
away all the const you want, but at that point, it kind of makes the
whole const thing moot.


This is not guaranteed to work. I guess the only related thing that is
safe to do is casting away const, but then not modifying the memory.


For what practical reason would that be the case? I know that the spec
states undefined behavior,


Case closed.


Re: More radical ideas about gc and reference counting

2014-05-11 Thread Timon Gehr via Digitalmars-d

On 05/11/2014 10:29 AM, Walter Bright wrote:

...

- A Comment on Rust --

This is based on my very incomplete knowledge of Rust, i.e. just reading
a few online documents on it. If I'm wrong, please correct me.
...


Well, region-based memory management is not new, and Rust's approach is 
natural, even more so when given some background.


Have you ever read up on type systems? Eg:

http://www.cis.upenn.edu/~bcpierce/tapl/
http://www.cis.upenn.edu/~bcpierce/attapl/


Rust's designers apparently are well aware of the performance cost of
pervasive ARC. Hence, they've added the notion of a borrowed pointer,
which is an escape from ARC.


It is not an escape from ARC per se. It is a way to write type safe code 
which is not dependent on the allocation strategy of the processed data. 
(One can e.g. safely borrow mutable data as immutable and the type 
system ensures that during the time of the borrow, the data does not 
mutate.)
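
A minimal sketch of this guarantee (my illustration, in present-day Rust 
syntax rather than the 2014-era sigils under discussion): mutable data can 
be borrowed as immutable, and the compiler rejects mutation for as long as 
the borrow is live.

```rust
// `sum` borrows immutably; it neither knows nor cares how the data
// is owned or allocated.
fn sum(xs: &[i32]) -> i32 {
    xs.iter().sum()
}

fn main() {
    let mut v = vec![1, 2, 3];   // mutable data
    let borrowed = &v;           // ...borrowed as immutable
    // v.push(4);                // error[E0502]: cannot borrow `v` as
    //                           // mutable while `borrowed` is live
    let s = sum(borrowed);       // borrow ends after its last use
    v.push(4);                   // now mutation is allowed again
    assert_eq!(s, 6);
    assert_eq!(v.len(), 4);
}
```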



The borrowed pointer is made memory safe by:

1. Introducing restrictions on what can be done with a borrowed pointer
so the compiler can determine its lifetime. I do not know the extent of
these restrictions.
...


The type system tracks lifetimes across function and data structure 
boundaries. The main restriction is that such a pointer cannot be 
escaped. (But borrowed pointers may still be stored in data structures.)



2. Introducing an annotation to distinguish a borrowed pointer from an
ARC pointer. If you don't use the annotation, you get pervasive ARC with
all the poor performance that entails.
...


No, this is completely inaccurate: Both choices are explicit, and 
reference counting is just one of the possible memory management 
schemes. (Hence borrowed pointers are used unless there is a reason not to.)



Implicit in borrowed pointers is Rust did not solve the problem of
having the compiler eliminate unnecessary inc/dec.
...


Avoiding inc/dec is not what justifies borrowed pointers.



My experience with pointer annotations to improve performance is pretty
compelling - almost nobody adds those annotations. They get their code
to work with the default, and never get around to annotating it.


The 'default' way to pass by reference is by borrowed pointer.


...
They do not mix. A function taking one type of
pointer cannot be called with the other type.
...


A function taking a borrowed pointer can be called with a reference 
counted pointer. Abstracting over allocation strategy is the point of 
borrowing.
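
A sketch of this point (my example, current Rust syntax, where `Rc<T>` is 
the spelling of the reference-counted pointer): one function over a borrow 
accepts ref-counted, owned, and static data alike, with no count traffic 
inside the callee.

```rust
use std::rc::Rc;

// Takes a borrow; says nothing about the caller's allocation strategy,
// and performs no reference-count increments or decrements.
fn len(s: &str) -> usize {
    s.len()
}

fn main() {
    let owned = String::from("heap-owned");
    let counted: Rc<String> = Rc::new(String::from("ref-counted"));
    let stat = "static";

    assert_eq!(len(&owned), 10);   // borrow of owned heap data
    assert_eq!(len(&counted), 11); // borrow of ref-counted data
    assert_eq!(len(stat), 6);      // borrow of static data
}
```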



Worse, these effects are transitive, making a function hierarchy rather
inflexible.

Are these valid concerns with Rust?


I don't think they are.



Re: More radical ideas about gc and reference counting

2014-05-11 Thread Timon Gehr via Digitalmars-d

On 05/11/2014 10:05 PM, Walter Bright wrote:

On 5/11/2014 8:54 AM, Timon Gehr wrote:

It is not an escape from ARC per se. It is a way to write type safe
code which
is not dependent on the allocation strategy of the processed data.
(One can e.g.
safely borrow mutable data as immutable and the type system ensures
that during
the time of the borrow, the data does not mutate.)


That's clearly an additional benefit of the borrowed pointer notion. But
have you examined generated Rust code for the cost of inc/dec? I
haven't, but I don't see any way they could avoid this (very expensive)
cost without borrowed pointers.
...


Sure, but performance is the additional benefit.




The type system tracks lifetimes across function and data structure
boundaries.
The main restriction is that such a pointer cannot be escaped. (But
borrowed
pointers may still be stored in data structures.)


The idea must be that the borrowed pointer lifetime cannot exceed the
lifetime of the lender.

The thing is, if the compiler is capable of figuring out these lifetimes
by examining the code,


There are explicit lifetime annotations in function signatures.


then there would be no need for the programmer to
specify a pointer as borrowed - the compiler could infer it.


Not modularly. (Yes, global lifetime analysis exists, but this is not 
what they are doing.)
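
A sketch of how a lifetime annotation in a signature (here 'a, current 
syntax; my example) makes the checking modular: callers are verified 
against the signature alone, without re-analyzing the function body.

```rust
// The signature alone promises: the returned reference lives no longer
// than the slice it came from. Each caller is checked against this
// contract modularly, not by global lifetime inference.
fn first<'a>(xs: &'a [i32]) -> &'a i32 {
    &xs[0]
}

fn main() {
    let copied;
    {
        let xs = vec![10, 20, 30];
        let r = first(&xs); // fine: `r` does not outlive `xs`
        copied = *r;        // copying the value out is always fine
        // Assigning `first(&xs)` to a reference declared outside this
        // block would be rejected: the borrow may not outlive `xs`.
    }
    assert_eq!(copied, 10);
}
```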



Hence, there have got to be significant restrictions to the point where the
compiler could figure out the rest.
...


It is simply not true that type systems are inherently restricted to 
checking trivial properties. They can be made as strong as mathematical 
logic without much fuss.





2. Introducing an annotation to distinguish a borrowed pointer from an
ARC pointer. If you don't use the annotation, you get pervasive ARC with
all the poor performance that entails.
...

No, this is completely inaccurate: Both choices are explicit,


Yes, one is & and the other is @.


No, actually currently one is & and the other is Rc<T> AFAIK.


This does not change my point.


In my opinion it actually does, to the point of rendering it invalid.


Back in
the bad old DOS days, there was near* and *. The choices were explicit,


Explicit among near* and *. I.e. between adding 'near' and leaving it 
out. Also, if you add 'near' it will not be compatible with pointers 
that do not have it.



but the results were there anyway - people overwhelmingly chose the more
general *, performance be damned.
...


Rc<T> is not more general. It cannot refer to stack-allocated data, for 
instance.





and reference
counting is just one of the possible memory management schemes. (Hence
borrowed
pointers are used unless there is a reason not to.)


Yes, I can see it supports other schemes, but I predict that RC will
dominate, and other schemes will be about as pervasive as using Boehm's
GC with C++.
...


The point is, borrowing does not depend on the allocation scheme, as 
long as it is safe.





Implicit in borrowed pointers is Rust did not solve the problem of
having the compiler eliminate unnecessary inc/dec.

Avoiding inc/dec is not what justifies borrowed pointers.


I find that very hard to believe. It implies that there is little cost
to inc/dec, or the compiler is clever enough to eliminate the bulk of
the inc/dec pairs.
...


Sure, borrowing is very lightweight, but ultimately what is most 
important is that it solves the problem of multiple incompatible pointer 
types and makes the type system more expressive as well.





My experience with pointer annotations to improve performance is pretty
compelling - almost nobody adds those annotations. They get their code
to work with the default, and never get around to annotating it.

The 'default' way to pass by reference is by borrowed pointer.


Time will tell how well having the most restrictive pointer type be the
default works out.
...


A function that uses none of the specific pointer capabilities is more 
general, so what other choice of 'default' makes sense?





They do not mix. A function taking one type of
pointer cannot be called with the other type.

A function taking a borrowed pointer can be called with a reference
counted
pointer. Abstracting over allocation strategy is the point of borrowing.


Right, and a function in DOS taking a * would accept a near*. But the
other way did not work, and so people wrote their functions using *,
performance was thrown under the bus.



Are these valid concerns with Rust?

I don't think they are.


Perhaps you're right. But my experience with DOS programming is
programmers preferred convenience and reusable functions, and hence used
plain * everywhere. And like I said, this provided a huge competitive
advantage for me.
...


Convenience and reusable functions means using borrowed pointers 
whenever possible.



The great boon with 32 bit code was the elimination of special pointer
types, now it was easy to write fast, reusable functions, and the effort
involved in creating efficient data 

Re: More radical ideas about gc and reference counting

2014-05-12 Thread Timon Gehr via Digitalmars-d

On 05/12/2014 02:50 AM, Walter Bright wrote:

On 5/11/2014 1:59 PM, Timon Gehr wrote:

On 05/11/2014 10:05 PM, Walter Bright wrote:

That's clearly an additional benefit of the borrowed pointer notion. But
have you examined generated Rust code for the cost of inc/dec? I
haven't, but I don't see any way they could avoid this (very expensive)
cost without borrowed pointers.

Sure, but performance is the additional benefit.


One constant theme in this thread, one I find baffling, is the regular
dismissal of the performance implications of inc/dec.


Irrelevant, I'm not doing that.

(And again, reference counting is not the only allocation mechanism in 
Rust. AFAICT, most allocations use owned pointers.)



Borrowed pointers are not necessary to support raw pointers  - this can be (and 
is in some
systems) supported by simply wrapping the raw pointer with a dummy
reference count.
...


I have no idea what this part is trying to bring across.


The reason for borrowed pointers is performance.


No, it is safety. Raw pointers give you all of the performance.


Rust would be non-viable without them.



True in that it would fail to meet its design goals. Rust provides a 
tracing garbage collector as well, so it is at least as viable as D 
regarding performance of safe memory management. (Probably more, 
actually, because it does not provide precision-unfriendly constructs 
such as undiscriminated unions.)



I strongly suggest writing a snippet in [[insert your favorite proven
technology RC language here]] and disassembling the result, and have a
look at what inc/dec entails.
...


I don't have trouble seeing the cost of reference counting. (And it's 
you who claimed that this is going to be the only memory allocation 
scheme in use in Rust code.)





The thing is, if the compiler is capable of figuring out these lifetimes
by examining the code,


There are explicit lifetime annotations in function signatures.


Yes, because the compiler cannot figure it out itself, so the programmer
has to annotate.
...


You are saying 'if the compiler is capable of figuring out these 
lifetimes by examining the code, then ...' and then you are saying that 
the compiler is incapable of figuring them out itself. What is it that 
we are arguing here? Are you saying the Rust compiler should infer all 
memory management automatically or that it cannot possibly do that, or 
something else?





It is simply not true that type systems are inherently restricted to
checking
trivial properties. They can be made as strong as mathematical logic
without
much fuss.


Again, Rust would not need borrowed pointers nor the annotations for
them if this knowledge could be deduced by the compiler. Heck, if the
compiler can deduce lifetimes accurately,


It does not deduce anything. It checks that borrowed pointers do not 
outlive their source. Lifetime parameters are used to transport the 
required information across function signatures.



you can get rid of GC and RC,
and just have the compiler insert malloc/free in the right spots.
...


That's a form of GC, and I already acknowledged that global region 
inference exists, but noted that this is not what is used.



Note that there is a Java version that does this partway, sometimes it
will replace a GC object with a stack allocated one if it is successful
in deducing that the object lifetime does not exceed the lifetime of the
function.
...


I know. Also, inference is harder to control and less efficient than 
simply making the relevant information part of type signatures.


http://en.wikipedia.org/wiki/Region-based_memory_management#Region_inference

This work was completed in 1995[9] and integrated into the ML Kit, a 
version of ML based on region allocation in place of garbage collection. 
This permitted a direct comparison between the two on medium-sized test 
programs, yielding widely varying results (between 10 times faster and 
four times slower) depending on how region-friendly the program was; 
compile times, however, were on the order of minutes.





Yes, one is & and the other is @.


No, actually currently one is & and the other is Rc<T> AFAIK.


Then Rust changed again. The document I read on borrowed pointers was
likely out of date, though it had no date on it.
...


Yes, most documents on Rust are at least slightly out of date.




Rc<T> is not more general. It cannot refer to stack-allocated data,
for instance.


So there is no general pointer type that has an unbounded lifetime?
...


How can it be general and have an unbounded lifetime and be safe?




Sure, borrowing is very lightweight, but ultimately what is most
important is
that it solves the problem of multiple incompatible pointer types and
makes the
type system more expressive as well.


Adding more pointer types makes a type system more expressive, by
definition.
...


No. Also, this is irrelevant, because I was highlighting the 
_importance_ of the fact that it does in this case.





A function that uses none of the 

Re: More radical ideas about gc and reference counting

2014-05-12 Thread Timon Gehr via Digitalmars-d

On 05/12/2014 10:54 AM, Walter Bright wrote:

On 5/11/2014 10:57 PM, Marco Leise wrote:

Am Sun, 11 May 2014 17:50:25 -0700
schrieb Walter Bright newshou...@digitalmars.com:


As long as those pointers don't escape. Am I right in that one cannot
store a
borrowed pointer into a global data structure?


Right, and that's the point and entirely positive-to-do™.


This means that a global data structure in Rust has to decide what
memory allocation scheme its contents must use,


Global variables are banned in Rust code outside of unsafe blocks.


and cannot (without tagging) mix memory allocation schemes.
...


Tagging won't help with all memory allocation schemes.


For example, let's say a compiler has internally a single hash table of
strings. With a GC, those strings can be statically allocated, or on the
GC heap, or anything with a lifetime longer than the table's.
But I don't see how this could work in Rust.



It's possible if you don't make the table global.
(OTOH in D this is not going to work at all.)
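
A sketch of the non-global variant (hypothetical names, current Rust 
syntax): the table is threaded through as a parameter, and its borrowed 
&str keys may point at static, stack, or longer-lived heap data alike.

```rust
use std::collections::HashSet;

// A string table passed by parameter instead of living in a global.
// Keys are borrowed; the lifetime parameter only requires that each
// key outlive the table, regardless of where it is allocated.
fn intern<'a>(table: &mut HashSet<&'a str>, s: &'a str) -> bool {
    table.insert(s) // true if the string was newly added
}

fn main() {
    let heap = String::from("heap");        // outlives the table
    let mut table: HashSet<&str> = HashSet::new();

    assert!(intern(&mut table, "static"));  // static storage
    assert!(intern(&mut table, &heap));     // heap storage
    assert!(!intern(&mut table, "static")); // already present
    assert_eq!(table.len(), 2);
}
```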


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Timon Gehr via Digitalmars-d

On 05/12/2014 06:37 PM, Walter Bright wrote:

On 5/12/2014 5:15 AM, Timon Gehr wrote:

On 05/12/2014 10:54 AM, Walter Bright wrote:

On 5/11/2014 10:57 PM, Marco Leise wrote:

Am Sun, 11 May 2014 17:50:25 -0700
schrieb Walter Bright newshou...@digitalmars.com:


As long as those pointers don't escape. Am I right in that one cannot
store a
borrowed pointer into a global data structure?


Right, and that's the point and entirely positive-to-do™.


This means that a global data structure in Rust has to decide what
memory allocation scheme its contents must use,


Global variables are banned in Rust code outside of unsafe blocks.


Global can also mean assigning through a reference passed as a parameter.



Do you mean the table is not actually global but passed by parameter, or 
that the global table is accessed in unsafe code and then passed by 
parameter or something else?


Re: borrowed pointers vs ref

2014-05-12 Thread Timon Gehr via Digitalmars-d

On 05/12/2014 10:36 PM, Walter Bright wrote:

It's been brought up more than once that the 'scope' storage class is an 
unimplemented borrowed pointer. But thinking a bit more along those lines, 
actually 'ref' fills the role of a borrowed pointer.

One particularly apropos behavior is that struct member functions pass 'this' 
by ref, meaning that members can be called without the inc/dec millstone.

ref is still incomplete as far as this goes, but we can go the extra distance 
with it, and then it will be of great help in supporting any ref counting 
solution.

What it doesn't work very well with are class references. But Andrei suggested 
that we can focus the use of 'scope' to deal with that in an analogous way.

What do you think?


I think everything should be treated uniformly. But a storage class is 
not sufficient.




Anyone want to enumerate a list of the current deficiencies of 'ref' in
regards to this, so we can think about solving it?


Eg:

- Cannot make tail const. / Cannot be reassigned.
- Cannot store in data structures.
- Cannot borrow slices of memory.
- Closures?
- (Probably more)


Re: More radical ideas about gc and reference counting

2014-05-13 Thread Timon Gehr via Digitalmars-d

On 05/13/2014 03:56 PM, Dicebot wrote:

On Monday, 12 May 2014 at 19:32:49 UTC, Jacob Carlborg wrote:

On 2014-05-12 19:14, Dicebot wrote:


It lacks any good static reflection though. And this stuff is damn
addictive when you try it of D caliber.


It has macros, that basically requires great support for static
reflection to be usable.


Judging by http://static.rust-lang.org/doc/0.6/tutorial-macros.html
those are not full-blown AST macros like ones you have been proposing,
more like hygienic version of C macros.


https://github.com/huonw/brainfuck_macro/blob/master/lib.rs


Re: Next step on reference counting topics

2014-05-13 Thread Timon Gehr via Digitalmars-d

On 05/13/2014 05:11 PM, Andrei Alexandrescu wrote:

On 5/13/14, 6:41 AM, Dicebot wrote:

On Monday, 12 May 2014 at 19:00:33 UTC, Andrei Alexandrescu wrote:

For that I'm proposing we start real work toward a state-of-the-art
std.refcounted module. It would include adapters for class, array, and
pointer types, and should inform language improvements for qualifiers
(i.e. the tail-const problem), copy elision, literals, operators, and
such.


We don't have language tools to do it. Main problem with implementing
reference counted pointers as structs is that you are forced to abandon
polymorphism:

class A {}
class B : A {}

void foo(RefCounted!A) {}

void main()
{
 RefCounted!B b = new B();
 foo(b); // no way to make it work
}

This severely limits applicability of such library solution, especially
when it comes to something like exceptions.


alias this should be of help here. Could you please try it? Thanks! --
Andrei



class A{
RefCounted!A r(){ ... }
}

class B : A {
override RefCounted!B r(){ ... } // ...
}

RefCounted!B[] b;
RefCounted!(const(A))[] a=b; // ...

etc.


Re: More radical ideas about gc and reference counting

2014-05-13 Thread Timon Gehr via Digitalmars-d

On 05/13/2014 09:07 PM, Jacob Carlborg wrote:

On 2014-05-13 15:56, Dicebot wrote:


Judging by http://static.rust-lang.org/doc/0.6/tutorial-macros.html
those are not full-blown AST macros like ones you have been proposing,
more like hygienic version of C macros.


Hmm, I haven't looked at Rust macros that much.



Again, the following is an example of Rust macros in action. A bf 
program is compiled to Rust code at compile time.


https://github.com/huonw/brainfuck_macro/blob/master/lib.rs

Compile-time computations create an AST which is then spliced. Seems 
full-blown enough to me and not at all like C macros.
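
The linked brainfuck compiler is the full-blown demonstration; as a much 
smaller sketch (my example, not from the linked code), even the 
declarative macro_rules! layer splices expression ASTs recursively and 
hygienically:

```rust
// max!(a, b, c) expands, at compile time, into a nested expression;
// the `a`/`b` temporaries are hygienic, so nested expansions cannot
// capture or clobber each other's names.
macro_rules! max {
    ($x:expr) => { $x };
    ($x:expr, $($rest:expr),+) => {{
        let a = $x;
        let b = max!($($rest),+);
        if a > b { a } else { b }
    }};
}

fn main() {
    assert_eq!(max!(3), 3);
    assert_eq!(max!(1, 4, 2), 4);
}
```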


Re: More radical ideas about gc and reference counting

2014-05-14 Thread Timon Gehr via Digitalmars-d

On 05/14/2014 06:41 PM, Wyatt wrote:



To me, shared is a type modifier that doesn't implicitely convert to
anything else without casting.


Interesting, maybe this should be clarified better.  Having skimmed back
over chapter 13 of TDPL, my understanding of its semantics are that it
only really enforces atomicity and execution order.  Also, this bit from
near the beginning of 13.12 states:
For all numeric types and function pointers, shared-qualified values
are convertible implicitly to and from unqualified values.

That sounds kind of at-odds with your interpretation...? :/


Not too much. Those are trivial cases (one copies the entire qualified 
data, so obviously one is free to change qualifiers as one wishes).


Re: Memory allocation purity

2014-05-15 Thread Timon Gehr via Digitalmars-d

On 05/15/2014 02:57 PM, Steven Schveighoffer wrote:

On Thu, 15 May 2014 02:50:05 -0400, Ola Fosheim Grøstad
ola.fosheim.grostad+dl...@gmail.com wrote:


On Thursday, 15 May 2014 at 06:29:06 UTC, bearophile wrote:

A little example of D purity (this compiles):



bool randomBit() pure nothrow @safe {
return (new int[1].ptr) < (new int[1].ptr);
}


Yes, and then you may as well allow a random generator with private
globals. Because memoing is no longer sound anyway.


This has nothing to do with allocators being pure. They must be pure as
a matter of practicality.

The issue that allows the above anomaly is using the address of
something. There is a reason functional languages do not allow these
types of things. But in functional languages, allocating is allowed.
...


Not really, allocation is just an implementation detail. The 
computational language is meaningful independent of how you might 
achieve evaluation of expressions. I can in principle evaluate an 
expression of such a language on paper without managing a separate 
store, even though this usually will help efficiency.


Functional programming languages are not about taking away features from 
a procedural core language. They are based on the idea that the 
fundamental operation is substitution of terms.



To be honest, code that would exploit such an anomaly is only ever used
in proof exercises, and never in real code.


Hashtables are quite real.


I don't think it's an issue.

-Steve


This is the issue:

On Thu, 15 May 2014 10:48:07 +
Don via Digitalmars-d digitalmars-d@puremagic.com wrote:



Yes. 'strong pure' means pure in the way that the functional
language crowd means 'pure'.
'weak pure' just means doesn't use globals.


There is no way to make that claim precise in an adequate way such that 
it is correct.


Re: Memory allocation purity

2014-05-15 Thread Timon Gehr via Digitalmars-d

On 05/15/2014 05:24 PM, Andrei Alexandrescu wrote:

On 5/15/14, 3:31 AM, luka8088 wrote:

Yeah, I read all about weak/string purity and I do understand the
background. I was talking about strong purity, maybe I should pointed
that out.

So, to correct myself: As I understood strong purity implies
memoization. Am I correct?


Yes, as long as you don't rely on distinguishing objects by address.

Purity of allocation is frequently assumed by functional languages


Examples?


because without it it would be difficult to get much work done.


Why?


Then,
most functional languages make it difficult or impossible to distinguish
values by their address.


If it's not impossible because of lack of the concept it's not a pure 
functional language.



In D that's easy. A D programmer needs to be
aware of that, and I think that's fine.


Andrei




It's not fine: The spec claims that this problem does not exist.

http://dlang.org/function.html

... and in cases where the compiler can guarantee that a pure function 
cannot alter its arguments, it can enable full, functional purity (i.e. 
the guarantee that the function will always return the same result for 
the same arguments)


Re: Memory allocation purity

2014-05-15 Thread Timon Gehr via Digitalmars-d

On 05/15/2014 11:45 AM, Don wrote:


No global state is a deep, transitive property of a function.
Memoizable is a superficial extra


Why? A memoizable function is still memoizable if it is changed 
internally to memoize values in global memory, for example.



property which the compiler
can trivially determine from @noglobal.


Re: Memory allocation purity

2014-05-15 Thread Timon Gehr via Digitalmars-d

On 05/15/2014 07:41 PM, Steven Schveighoffer wrote:


Not really, allocation is just an implementation detail. The
computational language is meaningful independent of how you might
achieve evaluation of expressions. I can in principle evaluate an
expression of such a language on paper without managing a separate
store, even though this usually will help efficiency.

Functional programming languages are not about taking away features
from a procedural core language. They are based on the idea that the
fundamental operation is substitution of terms.


But they do not deal in explicit pointers.


(Well,

http://hackage.haskell.org/package/base-4.7.0.0/docs/Foreign-Marshal-Alloc.html#v:malloc 
)



Otherwise, that's a can of
worms that would weaken the guarantees, similar to how D's guarantees
are weakened.
...


The concept of a 'pointer' to some primitive value does not have a 
meaning in such languages. Every value is an rvalue.



We have no choice in D, we must accept that explicit pointers are used.
...


We could e.g. ban comparing immutable references for identity / 
in-memory order.



To be honest, code that would exploit such an anomaly is only ever used
in proof exercises, and never in real code.


Hashtables are quite real.


Pretend I'm ignorant, what does this imply?
...


E.g. order of iteration may be dependent on in-memory order of keys and 
the 'same' keys might occur multiple times.
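
The same address anomaly is easy to reproduce outside D — a sketch in Rust 
(my example, mirroring the randomBit shape from earlier in the thread): 
the function is effect-free, yet its result is an allocator detail rather 
than a function of its (empty) argument list, so memoizing it would change 
observable behavior.

```rust
// Effect-free on the surface, but not memoizable: the result depends
// on where the allocator happens to place two fresh boxes.
fn random_bit() -> bool {
    let a = Box::new(0);
    let b = Box::new(0);
    (&*a as *const i32) < (&*b as *const i32)
}

fn main() {
    // The two live allocations are guaranteed distinct, so `<` has one
    // of exactly two outcomes -- but which one is not determined by the
    // program text.
    let a = Box::new(0);
    let b = Box::new(0);
    assert_ne!(&*a as *const i32, &*b as *const i32);
    let _ = random_bit();
}
```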



This is the issue:

On Thu, 15 May 2014 10:48:07 +
Don via Digitalmars-d digitalmars-d@puremagic.com wrote:



Yes. 'strong pure' means pure in the way that the functional
language crowd means 'pure'.
'weak pure' just means doesn't use globals.


There is no way to make that claim precise in an adequate way such
that it is correct.


This doesn't help, I'm lost with this statement.

-Steve


The issue is that there are apparently different expectations about what 
the keyword is supposed to do.




Re: Memory allocation purity

2014-05-15 Thread Timon Gehr via Digitalmars-d

On 05/15/2014 11:33 PM, Walter Bright wrote:

On 5/15/2014 9:07 AM, Timon Gehr wrote:

Why? A memoizable function is still memoizable if it is changed
internally to
memoize values in global memory, for example.


I doubt a compiler could prove it was pure.



Yes, that was actually my point. Memoizable is actually a non-trivial 
property.


(But note that while a compiler cannot in general discover a proof, it 
could just _check_ a supplied proof.)


Re: Memory allocation purity

2014-05-15 Thread Timon Gehr via Digitalmars-d
On 05/15/2014 03:06 PM, Ola Fosheim Grøstad 
ola.fosheim.grostad+dl...@gmail.com wrote:

On Thursday, 15 May 2014 at 12:37:22 UTC, w0rp wrote:

To consider the design of pure, you must first consider that you
cannot add functional purity to an imperative language.


Of course you can. Functional languages execute in an imperative
context.  That's why you need monads.
...


Strictly speaking you don't need monads, they are sometimes just an 
adequate way of structuring a program.



The term pure function is only needed in a non-functional language.
Applicative/functional languages only have mathematical functions, no
need for the term pure there.


In discussions about e.g. Haskell, it is often used to denote an 
expression of a specific form inside a `stateful' DSL. E.g. if η is 
the unit of some monad, then (η v) is sometimes called a pure value, 
while values of other forms are not called pure.


Re: Memory allocation purity

2014-05-15 Thread Timon Gehr via Digitalmars-d

On 05/15/2014 08:03 PM, Andrei Alexandrescu wrote:


Purity of allocation is frequently assumed by functional languages


Examples?


cons 1 2 is equal to cons 1 2
...


I don't see anything whose specification would need to mention 'allocation'.


because without it it would be difficult to get much work done.


Why?


It's rather obvious. You've got to have the ability to create new values
in a pure functional programming language.


This kind of operational reasoning is not essential. Of course, in 
practice you want to evaluate expressions, but the resulting programs 
are of the same kind as those of a non-pure language, and can do the 
same kind of operations. There is not really a distinction to be made at 
that level of abstraction.


Re: Memory allocation purity

2014-05-16 Thread Timon Gehr via Digitalmars-d

On 05/16/2014 01:00 AM, H. S. Teoh via Digitalmars-d wrote:

On Thu, May 15, 2014 at 03:22:25PM -0700, Walter Bright via Digitalmars-d wrote:

On 5/15/2014 2:41 PM, Timon Gehr wrote:

On 05/15/2014 11:33 PM, Walter Bright wrote:

On 5/15/2014 9:07 AM, Timon Gehr wrote:

Why? A memoizable function is still memoizable if it is changed
internally to memoize values in global memory, for example.


I doubt a compiler could prove it was pure.



Yes, that was actually my point: memoizability is a non-trivial
property.

(But note that while a compiler cannot in general discover a proof,
it could just _check_ a supplied proof.)


If the compiler cannot mechanically verify purity, the notion of
purity is rather useless, since as this thread shows it is incredibly
easy for humans to be mistaken about it.


What if the language allowed the user to supply a proof of purity, which
can be mechanically checked?

Just rephrasing what Timon said, though -- I've no idea how such a thing
might be implemented. You'd need some kind of metalanguage for writing
the proof in, perhaps inside a proof block attached to the function
declaration, that can be mechanically verified to be correct.


Yes, either that or one could even just implement it in the existing 
language by introducing types for evidence, and basic termination checking.


eg. http://dpaste.dzfl.pl/33018edab028
(This is a really basic example. Templates or more language features 
could be used to simplify some of the more tedious steps, but I think it 
illustrates well the basic ideas. Maybe there are some small mistakes 
because I didn't find the time to actually implement the checker.)




Alternatively, if only one or two statements are causing trouble, the
proof can provide mechanically checkable evidence just for those
specific statements.

The metalanguage must be mechanically checkable, of course. But this may
be more plausible than it sounds -- for example, solutions to certain
NP-complete problems are verifiable within a far shorter time than it
would take to actually solve said problem.


Indeed checking whether there is a proof of some fact up to some 
specific length N for a powerful enough logic is an NP-complete problem. 
(If N is encoded in unary.)



This suggests that perhaps,
while the purity of an arbitrary piece of code is, in general,
infeasible for the compiler to mechanically prove, it may be possible in
some cases that the compiler can mechanically verify a user-supplied
proof, and thus provide the same guarantees as it would for
directly-provable code.


T



In fact, this would cover most cases. Usually there is some checkable 
reason why a piece of code is correct (because otherwise it would not 
have been invented in the first place.)


Re: Memory allocation purity

2014-05-16 Thread Timon Gehr via Digitalmars-d

On 05/16/2014 01:56 AM, Walter Bright wrote:

On 5/15/2014 4:00 PM, H. S. Teoh via Digitalmars-d wrote:

What if the language allowed the user to supply a proof of purity, which
can be mechanically checked?


I think those sorts of things are PhD research topics.


Well, feasibility has long ago been demonstrated and I hope those ideas 
will eventually see general adoption.



It's a bit beyond the scope of what we're trying to do with D.



Sure, but it still makes sense to be aware of and think about what would 
be possible. (Otherwise it is too tempting to get fully sold on inferior 
technology, based on the mistaken assumption that there is no way to do 
significantly better.)


Re: Memory allocation purity

2014-05-17 Thread Timon Gehr via Digitalmars-d

On 05/16/2014 07:41 PM, Andrei Alexandrescu wrote:

On 5/16/14, 4:53 AM, Timon Gehr wrote:

...

Yes, either that or one could even just implement it in the existing
language by introducing types for evidence, and basic termination
checking.

eg. http://dpaste.dzfl.pl/33018edab028



(This is a really basic example. Templates or more language features
could be used to simplify some of the more tedious steps, but I think
it illustrates well the basic ideas. Maybe there are some small mistakes
because I didn't find the time to actually implement the checker.)




Typo: int_leibiz_equality :o). -- Andrei


If that is everything, then I am in good shape! :o)


Re: Memory allocation purity

2014-05-17 Thread Timon Gehr via Digitalmars-d

On 05/17/2014 09:29 PM, Timon Gehr wrote:






Typo: int_leibiz_equality :o). -- Andrei


If that is everything, then I am in good shape! :o)


It could be argued though, that this axiom was not too aptly named in 
the first place, because it describes the indiscernibility of identicals 
instead of the identity of indiscernibles.


Re: Memory allocation purity

2014-05-19 Thread Timon Gehr via Digitalmars-d

On 05/19/2014 09:03 PM, Dicebot wrote:


immutable(Object*) alloc() pure
{
 return new Object();
}

bool oops() pure
{
 auto a = alloc();
 auto b = alloc();
 return a is b;
}

This is a snippet that will always return `true` if memoization is at
work and `false` if strongly pure function will get actually called
twice. If changing result of your program because of silently enabled
compiler optimization does not indicate a broken compiler I don't know
what does.


Furthermore, it may not at all be obvious that this is happening: After 
all, purity can be inferred for template-heavy code, and comparing 
addresses will not prevent purity inference.
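To make that concrete, here is a hedged sketch (my code, not from the thread; it assumes DMD's attribute inference for templated functions):

```d
// Strongly pure: no parameters, immutable result.
immutable(Object) alloc() pure
{
    return new immutable(Object);
}

// Deliberately unannotated: as a template, oops gets purity *inferred*,
// and the address comparison below does not prevent that inference.
bool oops(T)()
{
    auto a = alloc();
    auto b = alloc();
    return a is b; // true under memoization, false under two real calls
}

void main()
{
    assert(!oops!int()); // without memoization, two distinct allocations
}
```

So the observable result of `oops` silently depends on whether the compiler chooses to memoize the strongly pure call, without any annotation in the code warning about it.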


Re: To deadalnix

2014-05-24 Thread Timon Gehr via Digitalmars-d

On 05/24/2014 05:03 AM, Joshua Niehus wrote:

watching your talk was like witnessing Fermat's last theorem being proven...

the scheduler solution was brilliant and the semantic analysis of a
mixin statement that resulted in a comprehensible error message blew my
mind.

Here is a belated applause that should have happened during those slides.

Well done.

josh


Unfortunately, I missed the talk. Sounds promising (I'm using an 
explicit scheduler component for semantic analysis as well.) What does 
"comprehensible mixin error message" mean exactly?


Re: Ref counting for CTFE?

2014-05-29 Thread Timon Gehr via Digitalmars-d

On 05/29/2014 07:33 PM, Steven Schveighoffer wrote:


But CTFE is full of code that expects to have a GC running, e.g. string
concatenation for mixins, etc.


Even the following code runs out of memory on my machine:

int foo(){
foreach(i;0..100_000_000){}
return 2;
}
pragma(msg, foo());

I.e. incrementing the loop counter consumes memory.


Re: Ref counting for CTFE?

2014-05-29 Thread Timon Gehr via Digitalmars-d

On 05/29/2014 06:53 PM, Dylan Knutson wrote:

...

Is there anything so radically different in D than these other
languages, that prevents the implementation of a run-of-the-mill VM to
eval D code?


No. (In fact, I've written a naive but mostly complete byte code 
interpreter in half a week or so last year, as part of an ongoing 
recreational D front end implementation effort.)



It just seems strange to me that it's such a problem when
this is basically solved by all scripting languages. And I'm really not
trying to downplay the difficulty in implementing CTFE in D, but rather
just figure out why it's so hard to implement in comparison.


CTFE is somewhat intertwined with semantic analysis, which makes it a 
little harder to specify/implement than usual interpreters. However, the 
performance problem is mostly a structural issue of the current 
implementation: DMD's CTFE interpreter gradually grew out of its constant 
folder in some kind of best-effort fashion, as far as I understand.


It is feasible to do everything in the usual fashion and occasionally 
just pause or restart interpretation at well-defined points where it 
needs to interface with semantic analysis.


Re: [OT] Apple introduces Swift as Objective-C successor

2014-06-02 Thread Timon Gehr via Digitalmars-d

On 06/02/2014 11:15 PM, Steven Schveighoffer wrote:


They have template constraints similar to D. It looks something like this:

func foo<T where T == Int>(t: T)

I think everything after the where can be some condition, but I don't
know how expressive that is. The examples aren't very telling.
...


https://developer.apple.com/library/prerelease/ios/documentation/Swift/Conceptual/Swift_Programming_Language/GenericParametersAndArguments.html#//apple_ref/doc/uid/TP40014097-CH37-XID_774

You can have subtype and equality constraints on generic type parameter 
and their associated types.



I still haven't figured out whether generics are runtime-based or
compile-time based.


It is actual parametric polymorphism and not a form of macro if that is 
what you mean. I.e. the declaration is type checked only once.



But some form of generics will be very nice to have.





Re: [OT] Apple introduces Swift as Objective-C successor

2014-06-02 Thread Timon Gehr via Digitalmars-d

On 06/02/2014 10:23 PM, Nick Sabalausky wrote:

On 6/2/2014 3:49 PM, Steven Schveighoffer wrote:

On Mon, 02 Jun 2014 15:45:28 -0400, Paulo Pinto pj...@progtools.org
wrote:


More information now made available

https://developer.apple.com/swift/


Memory is managed automatically, and you don’t even need to type
semi-colons.

...



Heh, yea, that's the #1 thing that jumped out at me as well. Sounds like
it probably sums up a lot about the language.
...


Well, it's a sales pitch. I skipped right to the language reference, but 
there seemed to be nothing there that would support the statement


Swift is the result of the latest research on programming languages,

But more annoyingly,

Syntax is tuned to make it easy to define your intent — for example, 
simple three-character keywords define a variable (var) or constant (let).


i.e. Swift is another language that is confused about what 'variable' 
and 'constant' mean.



The whole thing sounds like "We love how JS and Python allow mediocre
programmers to churn out bad software quickly, but since we *are* Apple
and love strategic lock-in just as much as MS, here's our own goofy
version of the exact same thing that we're going to push instead."



It is typed.


Re: @safe inference fundamentally broken

2014-06-05 Thread Timon Gehr via Digitalmars-d

On 06/05/2014 08:09 PM, Steven Schveighoffer wrote:


A quick example:

T[] getBuf(T)() @safe
{
T[100] ret;
auto r = ret[];
return r;
}

void main() @safe
{
auto buf = getBuf!int();
}

Note, the above compiles. An interesting thing here is that we have explicitly 
marked getBuf as @safe. So what if we want to remove that? It STILL compiles, 
because the compiler infers @safety!


AFAIK it's a well-known problem.



This situation is very bad. I personally think that we need to make
slicing a stack-allocated array INVALID in @safe code,


This issue is not a fundamental problem with @safe, nor is it a matter
of personal opinion, this is simply a compiler bug.


and not let that
code be inferred safe. We have already demonstrated an easy way to make
an internal delegate that can be @trusted for one line. That should be
used to work around this limitation.

I propose that we start migrating towards making slicing of stack data
un-@safe, first by making it a warning, enabled with -w. Then making it
an error.

Thoughts?


The fundamental issue seems to lie in methodology: @safe is approximated 
by the DMD implementation from the wrong side. Instead of gradually 
banning more and more constructs in @safe code, the implementation 
should have started out by allowing no constructs at all in @safe code, 
and then gradually allowed constructs that have been manually verified 
to be memory safe.




Re: @safe inference fundamentally broken

2014-06-05 Thread Timon Gehr via Digitalmars-d

On 06/05/2014 08:17 PM, deadalnix wrote:

@safe is fundamentally broken.


What's the fundamental problem? The construct seems perfectly fit to 
specify memory safety in at least the following context:


void main()@safe{}

:o)


Re: @safe inference fundamentally broken

2014-06-05 Thread Timon Gehr via Digitalmars-d

On 06/05/2014 08:47 PM, deadalnix wrote:

On Thursday, 5 June 2014 at 18:33:22 UTC, Timon Gehr wrote:

On 06/05/2014 08:17 PM, deadalnix wrote:

@safe is fundamentally broken.


What's the fundamental problem? The construct seems perfectly fit to
specify memory safety in at least the following context:

void main()@safe{}

:o)


Many constructs are assumed to be @safe on the basis of guarantees
that @safe doesn't actually provide.

T[] arr = [ ... ];
arr = arr[$ .. $];
auto garbage = *(arr.ptr);

@safe is as safe as a condom with holes poked in it.


I see this not as a fundamental problem with @safe, but with how its 
implementation has been approached.


Re: @safe inference fundamentally broken

2014-06-05 Thread Timon Gehr via Digitalmars-d

On 06/05/2014 08:35 PM, Steven Schveighoffer wrote:

On Thu, 05 Jun 2014 14:30:49 -0400, Timon Gehr timon.g...@gmx.ch wrote:


The fundamental issue seems to lie in methodology: @safe is approximated
by the DMD implementation from the wrong side. Instead of gradually
banning more and more constructs in @safe code, the implementation
should have started out by allowing no constructs at all in @safe
code, and then gradually allowed constructs that have been manually
verified to be memory safe.


I think I was one of those who argued to do it gradually. I was wrong.


I don't understand. Both strategies are gradual.


When one is manually marking @safe things, it's not as bad as when the
compiler is automatically marking them. But in either case, @safe
doesn't really mean safe, so it is pretty much useless.

-Steve


Actually there are two conflicting definitions of @safe in the 
documentation. One says that @safe means memory safe and the other says 
that @safe means what DMD does:


dlang.org/function.html

Function Safety

Safe functions are functions that are statically checked to exhibit no 
possibility of undefined behavior. Undefined behavior is often used as a 
vector for malicious attacks.


Safe Functions

Safe functions are marked with the @safe attribute.

The following operations are not allowed in safe functions:

[ too short list ]

It then goes on to note:

Note: The verifiable safety of functions may be compromised by bugs in 
the compiler and specification. Please report all such errors so they 
can be corrected.


This is simply the wrong way to go about it.

The documentation should instead say:

The following operations are _allowed_ in safe functions:


Re: Is Bug 5710 likely to get fixed?

2014-06-05 Thread Timon Gehr via Digitalmars-d

On 06/05/2014 08:42 PM, Steven Schveighoffer wrote:

On Thu, 05 Jun 2014 14:16:06 -0400, Russel Winder via Digitalmars-d
digitalmars-d@puremagic.com wrote:


https://issues.dlang.org/show_bug.cgi?id=5710

Is this likely to get fixed or is it more likely to drift along as an
unfixed issue?


If Kenji cannot fix it, it doesn't look good for getting fixed...
...


Why?


It would be nice, but I agree with the statement that it shouldn't force
the creation of a new delegate type.

-Steve


There is _absolutely no reason whatsoever_ to think about creation of a 
new delegate type. The entire argument around performance implications 
for existing code and creation of new delegate types is completely 
nonsensical and both David and Martin have already pointed this out.


Re: @safe inference fundamentally broken

2014-06-05 Thread Timon Gehr via Digitalmars-d

On 06/06/2014 01:15 AM, Walter Bright wrote:

On 6/5/2014 11:09 AM, Steven Schveighoffer wrote:

Thoughts?


Please file bugzilla issue(s) for this.



There already is one:
https://issues.dlang.org/show_bug.cgi?id=8838


Re: [OT] Extra time spent

2014-06-06 Thread Timon Gehr via Digitalmars-d

On 06/06/2014 04:37 PM, H. S. Teoh via Digitalmars-d wrote:

Yeah that sounds very familiar. A typical situation at my job goes
something like this:

Customer: I want feature X!
Sales rep: OK, we'll implement X in 1 month.
Customer: No, I want it by last month!
Sales rep: OK, and we'll throw in feature Y too, at no extra charge.
(Later)
Sales rep (to coders): Here's a new project for you: implement X and Y.
Coders: That sounds really complicated! It will take us 2 months.
Sales rep: What?! We don't have 2 months! They want this by*last*  month!
Coders: That's impossible. Even the quickest hack we can do will take 1
month.
Sales rep: This is a huge customer and it's going to cost us a billion
dollar deal! You have to*make*  it work!
Coders: sigh... OK, 3 weeks.
Sales rep: No, yesterday.
Coders: Fine, tomorrow we'll make a paper-n-glue model.
Sales rep: Today.
Coders: Sigh...


Isn't the fundamental problem here that the customer will pay a billion 
dollars even if the software ends up being full of bugs?


Re: How to best translate this C++ algorithm into D?

2014-06-07 Thread Timon Gehr via Digitalmars-d

On 06/07/2014 05:50 AM, logicchains wrote:


While my (much more concise; thanks D!) attempt at implementing it is:

forest_t[] meal(in forest_t[] forests) {
   forest_t[3] possible_meals = [{-1, -1, +1}, {-1, +1, -1}, {+1, -1, -1}];
   return map!(a => [possible_meals[0]+a, possible_meals[1]+a, possible_meals[2]+a])(forests)
 .join
 .filter!(a => !forest_invalid(a)).array
 .sort.uniq.array;
}

Any suggestions for how I could make the D code do the same as the C++
standard transform? Some sort of flatmap could be useful, to avoid the
need for a join.


You could use map instead of an explicit array literal that does the 
mapping manually and eagerly; I'm quite confident this accounts for much 
of the performance difference.



It'd also be nice if there was a way to sort/uniq the
filter results directly without having to convert to an array first.


You could use 'partition' instead of 'filter'.
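On the flat-map point from the quoted question, a generic helper can be sketched in a few lines (my naming; `joiner` is the lazy counterpart to `join`):

```d
import std.algorithm : map, joiner;
import std.array : array;

// Hypothetical helper: "flatmap" as map followed by a lazy join.
auto flatMap(alias fun, R)(R r)
{
    return r.map!fun.joiner;
}

void main()
{
    // Each element maps to a small array; joiner flattens the result lazily.
    auto xs = [1, 2, 3].flatMap!(x => [x, 10 * x]).array;
    assert(xs == [1, 10, 2, 20, 3, 30]);
}
```

Being lazy, this avoids materializing the intermediate array of arrays that an eager `join` would build.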


Re: How to best translate this C++ algorithm into D?

2014-06-07 Thread Timon Gehr via Digitalmars-d

On 06/07/2014 03:04 PM, logicchains wrote:

...

  return map!(a => [forest_t(-1, -1, +1)+a, forest_t(-1, +1, -1)+a,
forest_t(+1, -1, -1)+a])(forests)
   .join
   .partition!(forest_invalid)
   .sort.uniq.array;



What about (untested)?:

static forest_t[] possible_meals = [{-1, -1, +1}, {-1, +1, -1}, {+1, -1, -1}];

return forests.map!(a => possible_meals.map!(b => b + a))
.join.partition!forest_invalid.sort.uniq.array;

(Of course, this still allocates two arrays.)


Re: Tail pad optimization, cache friendlyness and C++ interrop

2014-06-11 Thread Timon Gehr via Digitalmars-d

On 06/11/2014 11:35 AM, Walter Bright wrote:


I'm not so sure about that, either. There are many ways of bit copying
structs, and some of them are perfectly memory safe.



It is not provable by the compiler, therefore it is not @safe.


'Not memory safe' implies (is supposed to imply) 'not @safe', but 'not 
@safe' does not imply 'not memory safe'.


Re: foreach

2014-06-12 Thread Timon Gehr via Digitalmars-d

On 06/12/2014 10:06 PM, Steven Schveighoffer wrote:




I don't think it's as trivial as you imply. You have to use a symbol
that's valid, but isn't used in the subsequent loop to avoid collisions.

-Steve


Why would implicit local variables need unique names? This can be 
implemented by preventing identifiers named '_' from being inserted into 
the symbol table.


Re: foreach

2014-06-13 Thread Timon Gehr via Digitalmars-d

On 06/12/2014 06:59 PM, bearophile wrote:

Nick Treleaven:


there is also this usage:

foreach (i, _; range){...}


I think this is a very uncommon usage. I think I have not used it so far.


Have you ever used void[0][T]?


Re: foreach

2014-06-13 Thread Timon Gehr via Digitalmars-d

On 06/13/2014 01:55 PM, Dicebot wrote:

Over 50 comments about minor syntax issue ...


Including yours.


Re: foreach

2014-06-13 Thread Timon Gehr via Digitalmars-d

On 06/13/2014 07:39 PM, Meta wrote:

On Friday, 13 June 2014 at 17:05:26 UTC, H. S. Teoh via Digitalmars-d
wrote:

I don't like arbitrary constants like the `true` in while(true) -- it
kinda goes against the grain, that while implies there is a stopping
point, but sticking true in there contradicts this notion and is
therefore dissonant with the concept of while.


while (1 == 1)
{
}

Is a better indicator of an infinite loop than for(;;) unless you're
really bad at math.


Why do you claim he is bad at math?


Re: Null pointer dereferencing in D

2014-06-13 Thread Timon Gehr via Digitalmars-d

On 06/13/2014 11:45 PM, Jonathan M Davis via Digitalmars-d wrote:

On Fri, 13 Jun 2014 21:23:00 +
deadalnix via Digitalmars-d digitalmars-d@puremagic.com wrote:


The approach consisting in having non nullable pointers/reference
by default is the one that is gaining traction and for good
reasons.


That interacts _really_ badly with D's approach of requiring init values for
all types. We have enough problems with @disable this() as it is.

- Jonathan M Davis



@disable this() and nested structs etc. Trying to require init values 
for everything isn't an extraordinarily good idea. It roughly extends 
'nullable by default' to all _structs_ with non-trivial invariants.


Re: foreach

2014-06-14 Thread Timon Gehr via Digitalmars-d

On 06/13/2014 11:41 PM, Jonathan M Davis via Digitalmars-d wrote:

It's a special case in that the middle portion is supposed to be the condition
that the loop use to determine whether it can continue, and omitting it means
that it has to add the true itself,


No, omitting it means that it does not need to check a condition in the 
first place.


Re: Null pointer dereferencing in D

2014-06-14 Thread Timon Gehr via Digitalmars-d

On 06/14/2014 02:39 AM, Jonathan M Davis via Digitalmars-d wrote:

On Sat, 14 Jun 2014 00:34:51 +0200
Timon Gehr via Digitalmars-d digitalmars-d@puremagic.com wrote:


On 06/13/2014 11:45 PM, Jonathan M Davis via Digitalmars-d wrote:

On Fri, 13 Jun 2014 21:23:00 +
deadalnix via Digitalmars-d digitalmars-d@puremagic.com wrote:


The approach consisting in having non nullable pointers/reference
by default is the one that is gaining traction and for good
reasons.


That interacts _really_ badly with D's approach of requiring init values
for all types. We have enough problems with @disable this() as it is.

- Jonathan M Davis



@disable this() and nested structs etc. Trying to require init values
for everything isn't an extraordinarily good idea. It roughly extends
'nullable by default' to all _structs_ with non-trivial invariants.


True, some types become problematic when you have to have an init value (like
a NonNullable struct to make nullable pointers non-nullable), but generic code
is way more of a pain to write when you can't rely on an init value existing,


Examples?


and there are a number of places that the language requires an init value
(like arrays),


Just use std.array.array.


making types which don't have init values problematic to use.


The solution is to have explicit nullable types, not to force a default 
value on every type.



Overall, I think that adding @disable this() to the language was a mistake.
...






Re: foreach

2014-06-14 Thread Timon Gehr via Digitalmars-d

On 06/14/2014 05:33 PM, bearophile wrote:

Timon Gehr:


Have you ever used void[0][T]?


I have never used that so far. What is it useful for?

Bye,
bearophile


It's a hash set with a somewhat awkward interface.
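For illustration, an untested sketch of the idiom (the zero-byte `(void[0]).init` value carries no data, so the associative array effectively stores keys only):

```d
bool demo()
{
    void[0][string] set;                // a set of strings in AA clothing
    set["red"]  = (void[0]).init;       // insert: the value occupies zero bytes
    set["blue"] = (void[0]).init;
    set.remove("blue");
    return ("red" in set) !is null && ("blue" in set) is null;
}

void main() { assert(demo()); }
```

The awkwardness is visible: insertion requires a dummy value, and membership tests go through the pointer-returning `in` operator.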


Re: foreach

2014-06-14 Thread Timon Gehr via Digitalmars-d

On 06/14/2014 11:23 PM, Jonathan M Davis via Digitalmars-d wrote:

...
It's the lack of a condition that I object to. IMHO it's a fundamental
violation of how loops work.


Fundamentally, loops loop. That is the only fundamental thing about loops.


Re: Tail pad optimization, cache friendlyness and C++ interrop

2014-06-15 Thread Timon Gehr via Digitalmars-d

On 06/15/2014 10:33 AM, Walter Bright wrote:


What Timon is saying is that not all memory safe code is verifiably
@safe.


In D, they are defined to be the same thing,


Since when?

http://dlang.org/function

Function Safety

Safe functions are functions that are _statically checked_ to exhibit 
_no possibility of undefined behavior_. Undefined behavior is often used 
as a vector for malicious attacks.


Safe Functions

Safe functions are marked with the @safe attribute.

The following operations are not allowed in safe functions:

I.e. the documentation has two (conflicting) definitions and none of 
them is the one you claim there is.



so the statement makes no sense.


Then please indicate how to fix the documentation. If you are going to 
claim the Humpty Dumpty privilege, I'll back off. Thanks.


On 06/11/2014 11:35 AM, Walter Bright wrote:

What's not provable? Why would bit copying a struct not be memory safe?


Since you claim "memory safe" is the same as "verifiably @safe", you are asking:


What's not provable? Why would bit copying a struct not be verifiably @safe?


struct S{ int x; }

void main()@safe{
S s,t;
memcpy(&s,&t,S.sizeof); // error
}

So, what is it that you are trying to bring across?
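The distinction can be made concrete with a sketch (my code, not from the thread): the bit copy below is memory safe, yet the compiler cannot verify it, so it only becomes usable from @safe code by vouching for it with @trusted.

```d
import core.stdc.string : memcpy;

struct S { int x; }

// Memory safe in fact, but not *verifiably* @safe: the compiler cannot
// check the raw memcpy, so we take responsibility with @trusted.
void bitCopy(ref S dst, ref const S src) @trusted
{
    memcpy(&dst, &src, S.sizeof);
}

void main() @safe
{
    S s, t = S(42);
    bitCopy(s, t);
    assert(s.x == 42);
}
```

This is exactly the gap between the semantic property (memory safety) and the checkable approximation (@safe).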


Re: Tail pad optimization, cache friendlyness and C++ interrop

2014-06-15 Thread Timon Gehr via Digitalmars-d

On 06/15/2014 08:44 PM, Walter Bright wrote:

On 6/15/2014 2:48 AM, Timon Gehr wrote:

On 06/15/2014 10:33 AM, Walter Bright wrote:


What Timon is saying is that not all memory safe code is verifiably
@safe.


In D, they are defined to be the same thing,


Since when?

http://dlang.org/function
 ...


so the statement makes no sense.


Then please indicate how to fix the documentation. If you are going to
claim the
Humpty Dumpty privilege, I'll back off. Thanks.



I don't know why the documentation says that. D's @safe is about memory
safety, not undefined behavior.
...


Note that this is frustratingly unhelpful for deciphering your point 
about "memory safe = verifiably @safe by definition". Are you 
defining "memory safe" or "verifiably @safe"?



Note that the list of eschewed behaviors are possibly memory corrupting.


It is an incomplete list. I'd rather see an incomplete list of _allowed_ 
behaviours instead of an incomplete list of eschewed behaviours. In any 
case, I don't see how any list of (syntactic) verifiable properties of 
the code is supposed to be equivalent to the (non-trivial semantic) 
memory safety property. Are you assuming @trusted as an oracle for 
memory safety and saying @trusted code is 'verifiably @safe' code? 
(That's not the intended reading.)



Signed integer overflow, for example, is not listed.



Are you trying to say that signed integer overflow is undefined 
behaviour in D?  (This would again contradict the documentation.)


Re: DIP63 : operator overloading for raw templates

2014-06-15 Thread Timon Gehr via Digitalmars-d

On 06/15/2014 08:32 PM, Dicebot wrote:

http://wiki.dlang.org/DIP63

This is solution for a problem I am currently having with implementing
http://wiki.dlang.org/DIP54 (afair it was also mentioned by Timon Gehr
during old discussion of that DIP)

New proposed semantics ( to catch your attention and get to read the
link ;) ):

template Pack(T...)
{
 alias expand = T;

 alias opIndex(size_t index) = T[index];
 alias opSlice(size_t lower, size_t upper) = Pack!(T[lower..upper]);
 alias opDollar = T.length;
}

// no ambiguity as Pack!(int, int) is not a valid type
// is(element == int)
alias element = Pack!(int, int)[1];


LGTM. Maybe you can add something along the following lines as another 
motivating use case:


struct Tuple(T...){
T expand;
template Pack(){
auto opSlice(size_t lower, size_t upper){
return tuple(expand[lower..upper]);
}
}
alias Pack!() this;
}
auto tuple(T...)(T args){ return Tuple!T(args); }

void main(){
Tuple!(double,int,string) t1=tuple(1.0,2,"three");
auto t2=t1[1..$];
static assert(is(typeof(t2)==Tuple!(int,string)));
foreach(i,v;t1) writeln(i,": ",v);
}

I.e. this solution is general enough to fix the unhygienic behaviour 
of Phobos tuple slicing.


Re: Tail pad optimization, cache friendlyness and C++ interrop

2014-06-15 Thread Timon Gehr via Digitalmars-d

On 06/16/2014 01:06 AM, Walter Bright wrote:

On 6/15/2014 3:45 PM, Timon Gehr wrote:

I don't know why the documentation says that. D's @safe is about memory
safety, not undefined behavior.
...


Note that this is frustratingly unhelpful for deciphering your point
about "memory safe = verifiably @safe by definition". Are you defining
"memory safe" or "verifiably @safe"?


I don't understand your question. I don't know what is unhelpful about
saying that @safe refers to memory safety.
...


You stated the two to be equivalent earlier, which is impossible.




Note that the list of eschewed behaviors are possibly memory corrupting.


It is an incomplete list.


I ask that you enumerate the missing items, put the list in bugzilla,
and tag them with the 'safe' keyword.



In any case, I
don't see how any list of (syntactic) verifiable properties of the
code is
supposed to be equivalent to the (non-trivial semantic) memory safety
property.


The list is not restricted to syntactic issues.
...


(Yes it is, but that is not important, because the problem here is 
clearly that these terms have wildly different meanings in different 
communities.)


The important distinction is between verifiable and non-verifiable. 
@safe cannot be equivalent to memory safe because @safe is verifiable 
and memory safe is not.





Are you assuming @trusted as an oracle for memory safety and saying
@trusted
code is 'verifiably @safe' code? (That's not the intended reading.)


Not at all. Where does the spec suggest that?
...


I'm just trying to find the definition/theorem we do not agree on.




Signed integer overflow, for example, is not listed.

Are you trying to say that signed integer overflow is undefined
behaviour in D?
(This would again contradict the documentation.)


I know the spec says it follows 2's complement arithmetic. I'm not 100%
confident we can rely on that for all 2's complement CPUs, and
furthermore we have a bit of a problem in relying on optimizers built
for C/C++ which rely on it being undefined behavior.

In any case, it is still not an issue for @safe.


Maybe not in practice (though I'll not bet on it), but a conforming 
implementation can do _anything at all_ if undefined behaviour occurs, 
including behaving as if memory had been corrupted.


Re: A Perspective on D from game industry

2014-06-16 Thread Timon Gehr via Digitalmars-d

On 06/16/2014 03:12 AM, H. S. Teoh via Digitalmars-d wrote:


This insight therefore causes D's templates to mesh very nicely with
CTFE to form a beautifully-integrated whole.


I wouldn't go exactly that far. For one thing, CTFE cannot be used to 
manipulate types.


Re: Tail pad optimization, cache friendlyness and C++ interrop

2014-06-16 Thread Timon Gehr via Digitalmars-d

On 06/16/2014 06:49 AM, Walter Bright wrote:

On 6/15/2014 4:26 PM, Timon Gehr wrote:

On 06/16/2014 01:06 AM, Walter Bright wrote:

I don't understand your question. I don't know what is unhelpful about
saying that @safe refers to memory safety.
...


You stated the two to be equivalent earlier, which is impossible.


It is for Java,


No. sun.misc.Unsafe can be used to write memory safe code.


why should D be different?



The list is not restricted to syntactic issues.

(Yes it is,


No, it is not. For example, assigning an int to a pointer is a semantic
issue, not a [syntactic] one.
...


By extension, memory safety is a semantic issue. Since it is 
non-trivial, it is in general not verifiable. (This is Rice's Theorem.)


@safe-ty OTOH is a verifiable property of the program.


...


I'm just trying to find the definition/theorem we do not agree on.


I strongly suggest that you can help by identifying specific issues in
bugzilla


My point was that no implementation of @safe whatsoever can make it 
_equivalent_ to memory safety (i.e. @safe will never single out 
precisely those programs that do not corrupt memory). It will always 
only approximate memory safety. This is not a bug, it's just a fact.



and marking them with the 'safe' keyword, as I suggested
earlier.


In my book, the main problem is that @safe is not specified in a way 
that can easily be verified to guarantee memory safety. I think plugging 
holes one after another as they are discovered in real code until 
_hopefully_ none remain is not a very productive use of time. We should 
strive to keep @safe sound throughout its implementation.



I do not believe that memory safety is a difficult concept to
agree on.



Why? Many superficially simple informal concepts are difficult to agree 
on. In this case this is witnessed by the fact that you do not seem to 
want to ban undefined behaviour from @safe code which I can't agree with.


Re: A Perspective on D from game industry

2014-06-16 Thread Timon Gehr via Digitalmars-d
On 06/16/2014 05:18 PM, Ola Fosheim Grøstad 
ola.fosheim.grostad+dl...@gmail.com wrote:

On Monday, 16 June 2014 at 15:07:08 UTC, H. S. Teoh via Digitalmars-d
wrote:

Having said that, though, proper use of string mixins with CTFE and
templates ('scuse me, *compile-time arguments* ;)) can be extremely
powerful, and one of the things that make D metaprogramming so awesome.


Sure, just like m4 and cpp can be extremely powerful. Too powerful…



I wouldn't go as far as comparing mixins to a text macro preprocessor 
either. At least they are integrated into the language.
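A small illustration of the difference: unlike output from m4 or cpp, a string mixin is parsed and semantically checked as D code in the scope where it is mixed in.

```d
void main()
{
    int x = 1;
    mixin("x += 2;");    // compiled and type-checked against the local x
    assert(x == 3);
    // mixin("y += 2;"); // compile error: undefined identifier y
}
```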


Re: Not initialized out argument error

2014-06-17 Thread Timon Gehr via Digitalmars-d

On 06/17/2014 02:02 AM, Walter Bright wrote:

On 6/16/2014 3:51 PM, bearophile wrote:

test.d(1,21): Error: uninitialised out argument of 'test3.foo' function


But it is not uninitialized. All out parameters are default initialized
to their .init value.


struct S{ @disable this(); }

void foo(out S s){} // ?


Re: A Perspective on D from game industry

2014-06-17 Thread Timon Gehr via Digitalmars-d

On 06/17/2014 04:00 PM, John Colvin wrote:


I thought the primary use of static foreach was to force the
compiler to attempt compile-time iteration even for
non-TemplateArgList arguments like arrays known at compile-time

e.g.

static foreach(el; [1,2,3,4])
{
  pragma(msg, el);
}

or

static foreach(el; 5 .. 8)
{
  pragma(msg, el);
}


No, that's a distinct use and IMO shouldn't be called static foreach (it 
would be inconsistent with static if in how scopes are handled.)


In any case, it is quite a boring use case as well: one can write a 
template that converts a range into such a list, and then plain foreach 
will work.
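A minimal sketch of such a conversion (modern Phobos ships an equivalent as std.meta.aliasSeqOf; in 2014-era Phobos one would spell AliasSeq as std.typetuple.TypeTuple):

```d
import std.meta : AliasSeq;

// Convert a compile-time-known array into an alias sequence so that
// plain foreach unrolls over it at compile time.
template toAliasSeq(alias arr, size_t i = 0)
{
    static if (i == arr.length)
        alias toAliasSeq = AliasSeq!();
    else
        alias toAliasSeq = AliasSeq!(arr[i], toAliasSeq!(arr, i + 1));
}

void main()
{
    foreach (el; toAliasSeq!([1, 2, 3, 4])) // unrolled; el is a constant
        pragma(msg, el);                    // printed at compile time
}
```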


Re: A Perspective on D from game industry

2014-06-17 Thread Timon Gehr via Digitalmars-d

On 06/17/2014 03:36 PM, John Colvin wrote:




also, foreach that works outside of function scope would be awesome:

mixin template A(TL ...)
{
    foreach(i, T; TL)
    {
        mixin("T v" ~ i.to!string ~ ";");
    }
}


Also, identifier mixins might then somewhat clean up a lot of code. The 
cases where a declaration name needs to be generated and this forces the 
whole declaration to be written in awkward string interpolation style 
are just too common, even more so if static foreach is supported (if 
there is any named declaration inside the static foreach body at all and 
the loop loops more than once, mixins will be required to prevent name 
clashes.)


Eg:

mixin template A(T...){
static foreach(i,S;T){
S mixin(`v`~i.to!string);
auto mixin(`fun`~i.to!string)(S s){
// lots of code potentially using `i' without first
// converting it to a string only for it to be parsed back.
// ...
return s.mixin(`member`~i); // I've wanted this too
}
}
}

Also, there may be cases where one really wants to have local 
declarations (eg. enums) inside the static foreach loop.


(I really need to get around to polishing/make formal that static 
foreach DIP!)
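As a historical note, static foreach was later adopted (DIP 1010, shipped in DMD 2.076) and supports exactly this: local declarations such as enums in the body, with string mixins for the generated names. A minimal sketch:

```d
// static foreach (DMD 2.076+): enum declarations in the body are allowed,
// but generated names still need string mixins to avoid clashes.
static foreach (i; 0 .. 3)
{
    mixin("enum val" ~ i.stringof ~ " = i * i;");
}

static assert(val0 == 0 && val1 == 1 && val2 == 4);
```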


Re: A Perspective on D from game industry

2014-06-17 Thread Timon Gehr via Digitalmars-d
On 06/17/2014 01:16 PM, Ola Fosheim Grøstad 
ola.fosheim.grostad+dl...@gmail.com wrote:

On Tuesday, 17 June 2014 at 09:17:21 UTC, Nick Sabalausky wrote:

I think you're hitting on the fundamental limitations of automated
code-updating tools here: They can't be treated as trusted black-boxes.


I don't think this is a fundamental limitation of tools, but a
consequence of language design.

I also think that features that make it difficult to write programs
that analyze the semantics also make it difficult for humans to
understand the code and verify its correctness.

Programming languages are in general still quite primitive (not specific
to D), they still rely on convention rather than formalisms.
...


That's a very odd statement to make about programming languages in general.


Re: A Perspective on D from game industry

2014-06-17 Thread Timon Gehr via Digitalmars-d
On 06/17/2014 06:53 PM, Ola Fosheim Grøstad 
ola.fosheim.grostad+dl...@gmail.com wrote:

On Tuesday, 17 June 2014 at 16:34:23 UTC, Timon Gehr wrote:

On 06/17/2014 01:16 PM, Ola Fosheim Grøstad
ola.fosheim.grostad+dl...@gmail.com wrote:



Programming languages are in general still quite primitive (not specific
to D), they still rely on convention rather than formalisms.
...


That's a very odd statement to make about programming languages in
general.


This is in the context of imperative languages that are used for writing
the majority of deployed applications. Do you disagree?


If you are only talking about those languages, not at all.


Re: Tail pad optimization, cache friendlyness and C++ interrop

2014-06-17 Thread Timon Gehr via Digitalmars-d
On 06/17/2014 11:50 PM, Ola Fosheim Grøstad 
ola.fosheim.grostad+dl...@gmail.com wrote:

...
Btw, Rice's theorem is based on the halting problem for TMs… so it
suffers from the same issues as everything else in theoretical CS when
it comes to practical situations.


There is no such 'issue', or any meaningful way to define a 'practical' 
situation, especially such that the definition would apply to the 
current context. Theoretical computer scientists are saying what they 
are saying and they know what they are saying.



Whether generated IR contains unsafe
instructions is trivially decidable. Since you can define an IR in a way
that discriminate between unsafe/safe instructions you can also decide
that the safe subset is verifiable memory safe.


As you know this will not single out _exactly_ the subset of programs 
which is memory safe which is the claim I was arguing against, and 
invoking Rice's theorem to this end is perfectly fine and does not 
suffer from any 'issues'. The kind of thing you discuss in this 
paragraph, which you appear to consider 'practical' is also studied in 
theoretical CS, so what's your point?


Re: RFC: Value range propagation for if-else

2014-06-19 Thread Timon Gehr via Digitalmars-d

On 06/18/2014 09:54 PM, Meta wrote:

...

This could be a bad thing. It makes it pretty enticing to use contracts
as input verification instead of logic verification.


The following is doable as well with a standard range analysis:

byte foo(immutable int x){
    if(x < byte.min || x > byte.max)
        throw new InvalidArgumentException(...);
    return x; // ok
}
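For contrast, the value range propagation DMD performs today is per-expression rather than flow-sensitive: it admits a narrowing only when the deduced range of the expression itself fits the target type, with no help from preceding tests. For example:

```d
void main()
{
    int x = 1000;
    ubyte lo = x & 0xFF; // ok: deduced range of (x & 0xFF) is 0 .. 255
    ubyte b  = x & 1;    // ok: deduced range is 0 .. 1
    // byte c = x;       // error: int does not fit into byte without a cast
}
```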


Re: Adding the ?. null verification

2014-06-19 Thread Timon Gehr via Digitalmars-d

On 06/18/2014 09:36 PM, H. S. Teoh via Digitalmars-d wrote:

Here's a first stab at a library solution:

/**
 * Simple-minded implementation of a Maybe monad.
 *


Nitpick: Please do not call it a 'Maybe monad'.
It is not a monad: it's neither a functor, nor does it have a μ operator. 
(This could be fixed, though.) Furthermore, opDispatch does not behave 
analogously to a (restricted) monadic bind operator:


class C{ auto foo=maybe(C.init); }

void main(){
import std.stdio;
C c=new C;
writeln(maybe(c).foo); // Maybe(Maybe(null))
}

The result should be Maybe(null), if the data type was to remotely 
resemble a monad.
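For illustration, a minimal sketch (with a hypothetical Opt type, not the implementation from the thread) of the two operations a lawful Maybe-like monad needs: a functor map, and the μ (join) operator that flattens the nesting shown above.

```d
struct Opt(T)
{
    bool present;
    T value;
}

Opt!T some(T)(T value) { return Opt!T(true, value); }
Opt!T none(T)() { return Opt!T(false, T.init); }

// functor operation: apply f inside the wrapper
auto map(alias f, T)(Opt!T m)
{
    alias U = typeof(f(m.value));
    return m.present ? some(f(m.value)) : none!U();
}

// μ (join): flatten Opt!(Opt!T) to Opt!T; monadic bind is then join of map
Opt!T join(T)(Opt!(Opt!T) mm)
{
    return mm.present ? mm.value : none!T();
}

void main()
{
    auto nested = some(some(42)); // Opt!(Opt!int)
    auto flat = join(nested);     // Opt!int, as a monad requires
    assert(flat.present && flat.value == 42);
}
```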


Furthermore, 'Maybe' is a more natural name for a type constructor that 
adds an additional element to another type, and 'Maybe monad' in 
particular is a term that already refers to this different meaning even 
more strongly in other communities.




Re: Adding the ?. null verification

2014-06-19 Thread Timon Gehr via Digitalmars-d

On 06/19/2014 06:58 PM, Yota wrote:


Won't opDispatch here destroy any hope for statement completion in the
future?  I feel like D already has little hope for such tooling
features, but this puts the final nail in the coffin.


auto opDispatch(string field)()
if(is(typeof(__traits(getMember, t, field)))) // <-- not a nail
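Expanded into a self-contained sketch (with a hypothetical Maybe wrapper): with such a constraint, opDispatch only accepts members that actually exist on the wrapped type, so completion tooling can still enumerate the valid fields instead of treating every identifier as potentially valid.

```d
struct Maybe(T)
{
    T t;
    auto opDispatch(string field)()
        if (is(typeof(__traits(getMember, t, field)))) // constrained
    {
        return __traits(getMember, t, field);
    }
}

struct Point { int x, y; }

void main()
{
    auto m = Maybe!Point(Point(1, 2));
    assert(m.x == 1);                // forwarded by opDispatch
    static assert(!is(typeof(m.z))); // rejected by the constraint
}
```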

