Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Kiith-Sa via Digitalmars-d

On Tuesday, 30 September 2014 at 21:19:44 UTC, Ethan wrote:
On Tuesday, 30 September 2014 at 08:48:19 UTC, Szymon Gatner 
wrote:
Considering how many games (and I don't mean indie anymore, but for example Blizzard's Hearthstone) are now created in Unity, which not only uses a GC but also runs on Mono, I am very skeptical of anybody claiming a GC is a no-go for games - *especially* since a native executable is being built in the case of D.


I realize AAAs have their reasons against GC, but in that case one should probably just get a UE4 license anyway.


Hello. AAA developer (Remedy) here using D. Custom tech, with a 
custom binding solution written originally by Manu and 
continued by myself.


A GC itself is not a bad thing. The implementation, however, is.

With a codebase like ours (mostly C++, some D), there's a few 
things we need. Deterministic garbage collection is a big one - 
when our C++ object is being destroyed, we need the D object to 
be destroyed at the same time in most cases. This can be 
handled by calling GC.collect() often, but that's where the 
next thing comes in - the time the GC needs. If the time isn't 
being scheduled at object destruction, then it all gets lumped 
together in the GC collect. It automatically moves the time 
cost to a place where we may not want it.


ARC garbage collection would certainly be beneficial there. I looked into adding support for it at a language level and at a library level, but the time it would have taken for me to learn both of those well enough to not muck it up made that infeasible. Writing a garbage collector that we have greater control over will also take up too much time. The simpler solution is to enforce coding standards that avoid triggering the GC.


It's something I will look at again in the future, to be sure. 
And also to be sure, nothing is being done in Unity to the 
scale we do stuff in our engine (at least, nothing in Unity 
that also doesn't use a ton of native code to bypass Unity's 
limitations).


GC.free() can be used to manually delete GC-allocated data (destroy() must be called first to run the destructor, though); delete does both, but is deprecated. You could write a simple RAII pointer wrapper if you don't want to always call destroy()+GC.free() manually.
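For illustration, here is a minimal sketch of such a wrapper around a GC-allocated class instance (Scoped and Resource are names made up for this example, not library types):

import core.memory : GC;

class Resource
{
    ~this() { /* release the C++-side state here */ }
}

// Minimal RAII wrapper over a GC-allocated class instance.
struct Scoped(T) if (is(T == class))
{
    private T obj;

    this(T o) { obj = o; }
    @disable this(this);              // no copying; the wrapper is the sole owner

    ~this()
    {
        if (obj !is null)
        {
            destroy(obj);             // run the destructor deterministically
            GC.free(cast(void*) obj); // then return the memory to the GC
            obj = null;
        }
    }
}

void main()
{
    auto r = Scoped!Resource(new Resource);
    // r is destroyed and its memory freed when this scope exits.
}

(For class instances, std.typecons.scoped is another option; it avoids the GC heap entirely by constructing the object on the stack.)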


Or do you need something else?


How to build phobos docs for dlang.org

2014-09-30 Thread Mark Isaacson via Digitalmars-d
I am in the process of working on some documentation improvements 
for Phobos. I am running into an issue while testing. Namely, I 
do not know how to build the ddocs for Phobos in quite the way 
that dlang.org does.


I can build them with: make -f posix.mak DMD=TheRightOne html

But everything is poorly formatted, and more importantly, there's 
some wizardry going on to make std.container look like one file 
on dlang.org and I therefore cannot find out how to preview my 
changes to the several files that actually compose that package. 
In other words, if I go to the page for std_string.html, it works perfectly, but if I try to go to std_container.html, it does not exist because there is no container.d file.


If I build dlang.org separately, I cannot follow the library 
reference link. The makefile for dlang.org includes rules for 
phobos-release and phobos-prerelease, but as far as I can tell, 
this does not generate the content I need (or I am not able to 
easily find it). If I copy the fully-built phobos html build into 
dlang.org/web/phobos then I can see the pages with the familiar 
dlang.org color scheme and layouts, but std_container.html still 
does not exist, and that is my fundamental problem.


This should really be documented somewhere. If nowhere else, this 
file seems appropriate: 
https://github.com/D-Programming-Language/dlang.org/blob/master/CONTRIBUTING.md


I hereby volunteer to document whatever answer I am given.


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Nordlöw
On Monday, 29 September 2014 at 10:49:53 UTC, Andrei Alexandrescu 
wrote:
Back when I've first introduced RCString I hinted that we have 
a larger strategy in mind. Here it is.


Slightly related :)

https://github.com/D-Programming-Language/phobos/pull/2573


Re: GC.sizeOf(array.ptr)

2014-09-30 Thread ketmar via Digitalmars-d
On Tue, 30 Sep 2014 17:43:04 +
Dicebot via Digitalmars-d  wrote:

> On a related note : CDGC D2 port passes druntime test suite 
> starting with today (with only shared library tests disabled), so 
> initial upstream PR should happen very soon (just need to clean 
> up all trace and "omg hax" commits :))
wow! no, really, what else can i say? ;-)




Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Manu via Digitalmars-d
On 29 September 2014 20:49, Andrei Alexandrescu via Digitalmars-d wrote:
> [...]
>
> Destroy!
>
> Andrei

I generally like the idea, but my immediate concern is that it implies
that every function that may deal with allocation is a template.
This interferes with C/C++ compatibility in a pretty big way. Or more
generally, the idea of a lib. Does this mean that a lib will be
required to produce code for every permutation of functions according
to memory management strategy? Usually libs don't contain code for
uninstantiated templates.

With this in place, I worry that traditional use of libs, separate
compilation, external language linkage, etc, all become very
problematic.
Pervasive templates can only work well if all code is D code, and if
all code is compiled together.
Most non-OSS industry doesn't ship source, they ship libs. And if libs
are to become impractical, then dependencies become a problem; instead
of linking libphobos.so, you pretty much have to compile phobos
together with your app (already basically true for phobos, but it's
fairly unique).
What if that were a much larger library? What if you have 10s of
dependencies all distributed in this manner? Does it scale?

I guess this doesn't matter if this is only a proposal for phobos...
but I suspect the pattern will become pervasive if it works, and yeah,
I'm not sure where that leads.


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread deadalnix via Digitalmars-d

On Wednesday, 1 October 2014 at 01:26:45 UTC, Manu via
Digitalmars-d wrote:
On 30 September 2014 08:04, Andrei Alexandrescu via Digitalmars-d wrote:

On 9/29/14, 10:16 AM, Paulo Pinto wrote:

Personally, I would go just for (b) with compiler support for increment/decrement removal, as I think it will be too complex having to support everything and this will complicate all libraries.


Compiler already knows (after inlining) that ++i and --i cancel each other, so we should be in good shape there. -- Andrei


The compiler doesn't know that MyLibrary_AddRef(Thing *t); and MyLibrary_DecRef(Thing *t); cancel each other out though... rc needs primitives that the compiler understands implicitly, so that rc logic can be more complex than ++i/--i;


Even with simply i++ and i--, the information that they always come in pairs is lost on the compiler in many cases.


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Manu via Digitalmars-d
On 30 September 2014 08:04, Andrei Alexandrescu via Digitalmars-d wrote:
> On 9/29/14, 10:16 AM, Paulo Pinto wrote:
>>
>> Personally, I would go just for (b) with compiler support for
>> increment/decrement removal, as I think it will be too complex having to
>> support everything and this will complicate all libraries.
>
>
> Compiler already knows (after inlining) that ++i and --i cancel each other,
> so we should be in good shape there. -- Andrei

The compiler doesn't know that MyLibrary_AddRef(Thing *t); and
MyLibrary_DecRef(Thing *t); cancel each other out though...
rc needs primitives that the compiler understands implicitly, so that
rc logic can be more complex than ++i/--i;
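A small sketch of the difference, with MyLibrary_AddRef/MyLibrary_DecRef standing in for any opaque C ref-counting API (the names are hypothetical):

// Opaque C ref-counting calls: the optimizer cannot see their bodies,
// so it cannot prove that the pair cancels out.
extern(C) void MyLibrary_AddRef(void* t);
extern(C) void MyLibrary_DecRef(void* t);

void useBriefly(void* t)
{
    MyLibrary_AddRef(t);  // cannot be elided
    // ... use t ...
    MyLibrary_DecRef(t);  // cannot be elided either
}

void useCounted(ref int refCount)
{
    ++refCount;           // after inlining, the optimizer can cancel
    // ... use the object ...
    --refCount;           // this pair, as Andrei describes
}

Compiler-recognized RC primitives would let the first pair be optimized away like the second.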


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Cliff via Digitalmars-d

On Tuesday, 30 September 2014 at 21:19:44 UTC, Ethan wrote:


Hello. AAA developer (Remedy) here using D. Custom tech, with a 
custom binding solution written originally by Manu and 
continued by myself.


A GC itself is not a bad thing. The implementation, however, is.

With a codebase like ours (mostly C++, some D), there's a few 
things we need. Deterministic garbage collection is a big one - 
when our C++ object is being destroyed, we need the D object to 
be destroyed at the same time in most cases. This can be 
handled by calling GC.collect() often, but that's where the 
next thing comes in - the time the GC needs. If the time isn't 
being scheduled at object destruction, then it all gets lumped 
together in the GC collect. It automatically moves the time 
cost to a place where we may not want it.


Not a GC specialist here, so maybe this is a naive thought - why not
turn off automatic GC until such times in the code where you can
afford the cost of it, then call GC.collect explicitly -
essentially eliminating the opportunity for the GC to run at
random times and forcing it to run at deterministic times?  Is memory
usage so constrained that failing to run collections in between
those deterministic points could lead to OOM?  Does such a
strategy have other nasty side effects which make it impractical?
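For what it's worth, the runtime already exposes exactly that pattern; a minimal sketch using core.memory (frame() and the collection cadence are placeholders for this example):

import core.memory : GC;

void frame()
{
    // per-frame simulation and rendering work
}

void main()
{
    GC.disable();             // suppress automatic collections
    scope(exit) GC.enable();  // (the GC may still collect under memory pressure)

    foreach (i; 0 .. 3600)
    {
        frame();
        if (i % 60 == 0)      // a point where the frame budget allows it
            GC.collect();     // collect at a time of our choosing
    }
}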


Re: Before we implement SDL package format for DUB

2014-09-30 Thread Sean Kelly via Digitalmars-d

On Monday, 25 August 2014 at 16:40:10 UTC, Jonathan Marler wrote:

Hello everyone,

I've been working on SDL support for DUB and wanted to get some 
people's opinions on whether we should really use SDL.  I've 
posted my thoughts here: 
http://forum.rejectedsoftware.com/groups/rejectedsoftware.dub/thread/2263/


So by ASON, do you mean:
http://www.americanteeth.org/libason/ason_spec.pdf ?  Weirdly
enough, this was published the same day you say you created ASON,
and yet my take on your ASON vs. this ASON is that they're very
different.


Re: Local functions infer attributes?

2014-09-30 Thread deadalnix via Digitalmars-d

On Tuesday, 30 September 2014 at 08:33:27 UTC, Trass3r wrote:

On Sunday, 28 September 2014 at 02:56:57 UTC, deadalnix wrote:

Also, inferring everything is quite
expensive and we want D to compile fast.


Doesn't the compiler have to do that anyway?
I'd expect a proper compiler to check if my code is actually what I claim it is. It's quite easy to mark something as e.g. nogc in the first version and later on add code with allocations.


It is for the function body, but most qualifiers are transitive,
so it depends on the inference for other functions.
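As a small illustration of that transitivity (helper and outer are made-up names; templates are used because they get attribute inference):

auto helper(T)(T x) { return x * 2; }          // inferred pure/@safe/@nogc/nothrow
auto outer(T)(T x)  { return helper(x) + 1; }  // its inference depends on helper's

@nogc void caller()
{
    auto y = outer(21);  // only compiles because inference succeeded transitively
}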


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Mike via Digitalmars-d
On Tuesday, 30 September 2014 at 12:32:08 UTC, Ola Fosheim 
Grøstad wrote:
...basic building blocks such as intrinsics to build your own 
RC with compiler support sounds like a more interesting option.


I agree.


Re: Before we implement SDL package format for DUB

2014-09-30 Thread Sean Kelly via Digitalmars-d

On Tuesday, 30 September 2014 at 22:41:15 UTC, Sean Kelly wrote:
On Monday, 25 August 2014 at 16:40:10 UTC, Jonathan Marler 
wrote:

Hello everyone,

I've been working on SDL support for DUB and wanted to get 
some people's opinions on whether we should really use SDL.  
I've posted my thoughts here: 
http://forum.rejectedsoftware.com/groups/rejectedsoftware.dub/thread/2263/


So by ASON, do you mean:
http://www.americanteeth.org/libason/ason_spec.pdf ?  Weirdly
enough, this was published the same day you say you created 
ASON,

and yet my take on your ASON vs. this ASON is that they're very
different.


Oops, for some reason I thought this was a new thread...


Re: Local functions infer attributes?

2014-09-30 Thread Walter Bright via Digitalmars-d

On 9/30/2014 2:13 AM, ixid wrote:

It might be an effective argument to give bearophile some of the
problematic code and see what his idiomatic D version looks like and if
what you're after is elegantly achievable.


Or heck, ask the n.g. Lots of people here are very creative in their 
solutions to various D problems.


You've shown me code that is essentially "I want to do XYZ with ref" but 
it's still at a low level - step up a layer or two.


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Sean Kelly via Digitalmars-d

On Tuesday, 30 September 2014 at 21:19:44 UTC, Ethan wrote:


With a codebase like ours (mostly C++, some D), there's a few 
things we need. Deterministic garbage collection is a big one - 
when our C++ object is being destroyed, we need the D object to 
be destroyed at the same time in most cases. This can be 
handled by calling GC.collect() often, but that's where the 
next thing comes in - the time the GC needs. If the time isn't 
being scheduled at object destruction, then it all gets lumped 
together in the GC collect. It automatically moves the time 
cost to a place where we may not want it.


Would delete on the D side work here?  Or the more current
destroy()?  ie. is release of the memory a crucial part of the
equation, or merely finalization?


Re: Before we implement SDL package format for DUB

2014-09-30 Thread Nick Sabalausky via Digitalmars-d

On 09/30/2014 08:15 AM, Bruno Medeiros wrote:


I don't like SDL much (because it's not that well-known, is whitespace
sensitive, and other reasons). But that's mostly a personal preference,
SDL is not a bad choice either.



FWIW, the only "whitespace" sensitivity in SDL is just the fact that 
it's newline-terminated (with optional line-continuation). (Well, and 
the auto-unindenting of string literals that use line-continuation.) 
That's really all there is. Again, just FWIW.




Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Ethan via Digitalmars-d
On Tuesday, 30 September 2014 at 08:48:19 UTC, Szymon Gatner 
wrote:
Considering how many games (and I don't mean indie anymore, but for example Blizzard's Hearthstone) are now created in Unity, which not only uses a GC but also runs on Mono, I am very skeptical of anybody claiming a GC is a no-go for games - *especially* since a native executable is being built in the case of D.


I realize AAAs have their reasons against GC, but in that case one should probably just get a UE4 license anyway.


Hello. AAA developer (Remedy) here using D. Custom tech, with a 
custom binding solution written originally by Manu and continued 
by myself.


A GC itself is not a bad thing. The implementation, however, is.

With a codebase like ours (mostly C++, some D), there's a few 
things we need. Deterministic garbage collection is a big one - 
when our C++ object is being destroyed, we need the D object to 
be destroyed at the same time in most cases. This can be handled 
by calling GC.collect() often, but that's where the next thing 
comes in - the time the GC needs. If the time isn't being 
scheduled at object destruction, then it all gets lumped together 
in the GC collect. It automatically moves the time cost to a 
place where we may not want it.


ARC garbage collection would certainly be beneficial there. I looked into adding support for it at a language level and at a library level, but the time it would have taken for me to learn both of those well enough to not muck it up made that infeasible. Writing a garbage collector that we have greater control over will also take up too much time. The simpler solution is to enforce coding standards that avoid triggering the GC.


It's something I will look at again in the future, to be sure. 
And also to be sure, nothing is being done in Unity to the scale 
we do stuff in our engine (at least, nothing in Unity that also 
doesn't use a ton of native code to bypass Unity's limitations).


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Ola Fosheim Grostad via Digitalmars-d

On Tuesday, 30 September 2014 at 20:13:38 UTC, Paulo Pinto wrote:
On 30.09.2014 14:55, "Ola Fosheim Grøstad" wrote:
On Tuesday, 30 September 2014 at 12:51:25 UTC, Paulo Pinto wrote:


It works when two big ifs come together.

- inside the same scope (e.g. function level)

- when the reference is not shared between threads.

While it is of limited applicability, Objective-C (and eventually Swift) codebases prove it helps in most real life use cases.


But Objective-C has thread-safe ref-counting?!

If it isn't thread safe it is of very limited utility; you can usually get away with unique_ptr in single-threaded scenarios.


Did you read my second bullet?


Yes? I don't want built-in RC by default for single-threaded use cases. I do want it when references are shared between threads, e.g. for cache objects.




Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Paulo Pinto via Digitalmars-d
On 30.09.2014 14:55, "Ola Fosheim Grøstad" wrote:

On Tuesday, 30 September 2014 at 12:51:25 UTC, Paulo  Pinto wrote:


It works when two big ifs come together.

- inside the same scope (e.g. function level)

- when the reference is not shared between threads.

While it is of limited applicability, Objective-C (and eventually
Swift) codebases prove it helps in most real life use cases.


But Objective-C has thread safe ref-counting?!

If it isn't thread safe it is of very limited utility, you can usually
get away with unique_ptr in single threaded scenarios.


Did you read my second bullet?


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Paulo Pinto via Digitalmars-d

On 30.09.2014 16:28, Szymon Gatner wrote:

On Tuesday, 30 September 2014 at 14:19:51 UTC, Araq wrote:

It doesn't mention anything about moving C++ into C#.
Even with IL2CPP, C# has fundamental design trade offs that make it
slower than C++(GC is just one of them), so it wouldn't make much
sense to port engine code to C# unless they wanted it to run slower.


What are these fundamental design trade offs?


Guys I beg you, is there any chance I will get my answers? ;)



Sorry got carried away. Ola and others know better.


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Paulo Pinto via Digitalmars-d

On 30.09.2014 17:32, po wrote:

On Tuesday, 30 September 2014 at 14:19:51 UTC, Araq wrote:

It doesn't mention anything about moving C++ into C#.
Even with IL2CPP, C# has fundamental design trade offs that make it
slower than C++(GC is just one of them), so it wouldn't make much
sense to port engine code to C# unless they wanted it to run slower.


What are these fundamental design trade offs?


-GC
-no meta programming vs meta programming


C# has meta programming capabilities via attributes, MSIL manipulation, 
reflection and having the compiler available as a library.


It is not as powerful or clean as D, but it gets the job done in many cases.


-defaults to ref semantics C# vs value semantics C++


Yes, it is a bummer, but they do exist to a certain extent. One just 
needs to make use of them.




-C++ supports unique types, C# does not


Lost me there.


-C# lambda automatic/GC, C++ lambda fully customized via capture list


Which very few people understand properly and end up capturing the whole 
environment anyway.




...

  I don't really think this is the same situation. I don't think C# is
any higher level than C++. Having a GC does not make it automatically a
higher level language, nor does it make it more "productive".

  That said, I think it is much easier to be productive in C# if you are
starting from scratch, but with the proper setup & in-depth knowledge of
C++, it is every bit as productive (especially in games, where things
like GC end up as more of a burden).



The problem is that not everyone has good knowledge of C++.

I have used C++ on and off since 1993, and always advocated it over C back when it was considered slow(!). However, I have very seldom met fellow developers on projects with similar C++ knowledge, except for the time I spent at CERN.


I follow the game industry from a distance, having tried a few times in the past to be part of it. I was an IGDA member for a while, attended developer meetups at the game development university in Düsseldorf, and regularly buy the game development magazine of the German studios.

Many mid-size studios in Germany are now betting on C# (Unity/MonoGame) and Flash, mostly for tooling and indie-quality games, but sometimes all the way to production.


If D already had a GC that could rival the CLR GC, it would be a great alternative for those studios, regardless of what D's answer to memory management turns out to be.


Especially given that D can interoperate with C and C++ better than the marshalling required by the alternatives.


--
Paulo




Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread via Digitalmars-d
On Tuesday, 30 September 2014 at 19:47:32 UTC, Ola Fosheim 
Grostad wrote:
I don't think there will be a game-friendly D version until someone decides to cooperate on D--. Basically cutting features and redesigning for a fast precise GC that minimizes cache load and that can run 60 times per sec without taking more than 10% of the CPU.


I think it is doable for a given execution and memory model. 
Add some constraints and performance will happen! :-)


There's probably not much feature-wise that stands in the way of 
a fast precise GC. Implicitly shared `immutable` is one example, 
but other than that, I would say it's mostly unimplemented bits 
and pieces (missing type information) and "wrong" decisions made 
when designing the standard library.


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread via Digitalmars-d

On Tuesday, 30 September 2014 at 19:10:19 UTC, Marc Schütz wrote:
I'm convinced this isn't necessary. Let's take `setExtension()` 
as an example, standing in for any of a class of similar 
functions. This function allocates memory, returns it, and 
abandons it; it gives up ownership of the memory. The fact that 
the memory has been freshly allocated means that it is (head) 
unique, and therefore the caller (= library user) can take over 
the ownership. This, in turn, means that the caller can decide 
how she wants to manage it.


(I'll try to make a sketch on how this can be implemented in 
another post.)


Ok. What we need for it:

1) @unique, or a way to expressly specify uniqueness on a 
function's return type, as well as restrict function params by it 
(and preferably overloading on uniqueness). DMD already has this 
concept internally, it just needs to be formalized.


2) A few modifications to RefCounted to be constructable from 
unique values.


3) A wrapper type similar to std.typecons.Unique, that also 
supports moving. Let's call it Owned(T).


4) Borrowing.

setExtension() can then look like this:

Owned!string setExtension(in char[] path, in char[] ext);

To be used:

void saveFileAs(in char[] name) {
    import std.path: setExtension;
    import std.file: write;
    name                      // scope const(char[])
        .setExtension("txt")  // Owned!string
        .write(data);
}

The Owned(T) value implicitly converts to `scope!this(T)` via 
alias this; it can therefore be conveniently passed to 
std.file.write() (which already takes the filename as `in`) 
without copying or moving. The value then is released 
automatically at the end of the statement, because it is only a 
temporary and is not assigned to a variable.


For transferring ownership:

RefCounted!string[] filenames;
// ...
filenames ~= name.setExtension("txt").release;

`Owned!T.release()` returns the payload as a unique value, and resets the payload to its init value (in this case `null`). RefCounted's constructor then accepts this unique value and takes ownership of it. When the Owned value's destructor is called, it finds the payload to be null and doesn't free the memory. Inlining and subsequent optimization can turn the destructor into a no-op in this case.
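As a rough illustration of the release()/destructor mechanics described here (a sketch only - the proposed @unique, scope!this and borrowing pieces don't exist, so this just models ownership of a GC-allocated payload):

struct Owned(T)
{
    private T payload;

    this(T p) { payload = p; }
    @disable this(this);          // ownership moves only via release()

    // Hand the payload to another owner (e.g. RefCounted) and reset to
    // T.init, so the destructor below becomes a no-op.
    T release()
    {
        auto p = payload;
        payload = T.init;
        return p;
    }

    ~this()
    {
        import core.memory : GC;
        if (payload !is T.init)
        {
            static if (is(T : U[], U))
                GC.free(cast(void*) payload.ptr);  // array payload
            else
                GC.free(cast(void*) payload);      // class/pointer payload
        }
    }
}

void example()
{
    auto o = Owned!(int[])(new int[](4));
    // ... use o.payload ...
}   // freed here, because release() was never called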


Optionally, Owned!T can provide an `alias this` to its release 
method; in this case, the method doesn't need to be called 
explicitly. It is however debatable whether being explicit with 
moving isn't the better choice.


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Ola Fosheim Grostad via Digitalmars-d
On Tuesday, 30 September 2014 at 17:11:48 UTC, Szymon Gatner 
wrote:
I use both std/boost and exceptions when it makes sense - a game is not just rendering and number crunching after all.


I used some parts of boost c++0x std on ios a few years ago. I 
guess it is no longer maintained, but maybe it is possible to use 
the useful parts of a generic std library and match the memory 
layout on the D side?


calling writeln() from D side). Win32 support is coming but I 
expect similar problems (is nobody really mixing C++ and D 
using VC++ atm?).


Dunno.

That being said, my biggest fear is that D2 will never be finished... I have been lurking on these forums for 2 years now, waiting for the signal to start the transition, but I need to be sure that in a few months everything I need and the code I write will work as expected (and on iOS too). I am not seeing this


Well, I looked at D1 eight years ago with the intent of using it for game/world content on the server side. It was kind of nice, but the compiler was basic. Then I decided to drop D1 and wait for D2 when it was announced, and I have been tracking it ever since... So yeah, impatient. I don't think there will be a game-friendly D version until someone decides to cooperate on D--. Basically cutting features and redesigning for a fast precise GC that minimizes cache load and that can run 60 times per sec without taking more than 10% of the CPU.


I think it is doable for a given execution and memory model. Add 
some constraints and performance will happen! :-)


strings...). It looks like Phobos might need to be rewritten entirely, and soon. I will not give up though; if I must skip D for one more project (which will last a year or two) then so be it, hopefully I will be able to use it for the next one :(


If all the people who want to use it for game content (not engine, but content) cooperated and created C++-compatible datatypes, then maybe we could have something going within 6-12 months?






Re: GC.sizeOf(array.ptr)

2014-09-30 Thread Sean Kelly via Digitalmars-d

On Tuesday, 30 September 2014 at 19:19:18 UTC, Steven
Schveighoffer wrote:


Hm... looked at the code, I have no idea how the GC would 
handle user-defined stuff. It seems to only deal with bits it 
knows about (i.e. APPENDABLE is specifically handled in gc/gc.d)


It wouldn't.  The GC was just going to provide storage for some
number of user-defined bits.


I think I just took the next bit available, if there was a 
warning about GC internal bits when I added APPENDABLE, I 
either missed it or dismissed it.


With BlkAttr defined independently in a bunch of different places
at the time, the comment would have been easy to miss.  I don't
know that it really matters anyway, but it's something I noticed
today.


Re: GC.sizeOf(array.ptr)

2014-09-30 Thread Steven Schveighoffer via Digitalmars-d

On 9/30/14 2:19 PM, Sean Kelly wrote:

On Tuesday, 30 September 2014 at 17:51:18 UTC, Steven
Schveighoffer wrote:

On 9/30/14 1:23 PM, Sean Kelly wrote:


(except for the
definition of the APPENDABLE BlkAttr, which really should be
defined externally and within the user-reserved range of the
bitfield instead of the GC-reserved range, but I digress...)


The APPENDABLE bit was added as a solution to avoid having to reserve
that memory for all allocations. Basically, if you try to append to a
block that doesn't have the bit, it simply reallocates conservatively.

So it does have to be part of the GC metadata, because the most
important effect is on blocks that AREN'T allocated via the array
runtime. Otherwise, the append mechanism creeps into all aspects of
memory allocation.


Yeah I know.  But when I defined the BlkAttr bitfield I'd
reserved one portion of the range for internal GC stuff and
another portion for user-defined stuff.  APPENDABLE should
probably have landed in the user-defined portion.  I don't see
any of those comments in the current code or I'd point to them.
I guess they were deleted at some point.


Hm... looked at the code, I have no idea how the GC would handle 
user-defined stuff. It seems to only deal with bits it knows about (i.e. 
APPENDABLE is specifically handled in gc/gc.d)


I think I just took the next bit available, if there was a warning about 
GC internal bits when I added APPENDABLE, I either missed it or 
dismissed it.


-Steve


Re: Program logic bugs vs input/environmental errors

2014-09-30 Thread Jeremy Powers via Digitalmars-d
On Tue, Sep 30, 2014 at 5:43 AM, Steven Schveighoffer via Digitalmars-d <
digitalmars-d@puremagic.com> wrote:

> ...
> Well, the failure comes from the effort to effect a certain behavior.
>
> Sun was looking to make programmers more diligent about handling errors.
> However, humans are lazy worthless creatures. What ends up happening is,
> the compiler complains they aren't handling an exception. They can't see
> any reason why the exception would occur, so they simply catch and ignore
> it to shut the compiler up.
>
> In 90% of cases, they are right -- the exception will not occur. But
> because they have been "trained" to simply discard exceptions, it ends up
> defeating the purpose for the 10% of the time that they are wrong.
>
>
That's the argument, but it doesn't seem valid to me.  Without checked
exceptions, you will always be ignoring exceptions.  With checked
exceptions, you have to explicitly ignore (some) exceptions, and when you
do it is immediately obvious in the code.  You go from everyone ignoring
exceptions all the time, to some people ignoring them - and being able to
easily notice and call out such.

Anyone 'trained' to ignore checked exceptions is simply shooting
themselves in the foot - same as if there were no checked exceptions, but
with more verbosity.  This is not a failure of checked exceptions, but a
failure of people to use a language feature properly. (Which, yeah, meta is
a failure of the feature... not going to go there)



> If you have been able to resist that temptation and handle every
> exception, then I think you are in the minority. But I have no evidence to
> back this up, it's just a belief.
>
>
In my world of professional java, ignoring exceptions is an immediate,
obvious indicator of bad code.  You will be called on it, and chastised
appropriately.  So from my standpoint, Sun was successful in making
programmers more diligent about handling errors.



>  Note I am not advocating adding checked exceptions to D (though I would
>> like it).  Point is to acknowledge that there are different kinds of
>> exceptions, and an exception for one part of the code may not be a
>> problem for the bit that invokes it.
>>
>>
> I think this is appropriate for a lint tool for those out there like
> yourself who want that information. But requiring checked exceptions is I
> think a futile attempt to outlaw natural human behavior.
>
>
Perhaps I shouldn't have mentioned checked exceptions at all; it seems to be
distracting from what I wanted to say.  The important bit I wanted to bring
to the discussion is that not all exceptions are the same, and different
sections of code have their own ideas of what is a breaking problem.  A
module/library/component/whatever treats any input into itself as its
input, and thus appropriately throws exceptions on bad input.  But code
using that whatever may be perfectly fine handling exceptions coming from
there.

Exceptions need to be appropriate to the given abstraction, and dealt with
by the user of that abstraction.


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread via Digitalmars-d

Ok, here are my few cents:

On Monday, 29 September 2014 at 10:49:53 UTC, Andrei Alexandrescu 
wrote:
Back when I've first introduced RCString I hinted that we have 
a larger strategy in mind. Here it is.


The basic tenet of the approach is to reckon and act on the 
fact that memory allocation (the subject of allocators) is an 
entirely distinct topic from memory management, and more 
generally resource management. This clarifies that it would be 
wrong to approach alternatives to GC in Phobos by means of 
allocators. GC is not only an approach to memory allocation, 
but also an approach to memory management. Reducing it to 
either one is a mistake. In hindsight this looks rather obvious 
but it has caused me and many people better than myself a lot 
of headache.


I would argue that GC is at its core _only_ a memory management 
strategy. It just so happens that the one in D's runtime also 
comes with an allocator, with which it is tightly integrated. In 
theory, a GC can work with any (and multiple) allocators, and you 
could of course also call GC.free() manually, because, as you 
say, management and allocation are entirely distinct topics.




That said allocators are nice to have and use, and I will 
definitely follow up with std.allocator. However, std.allocator 
is not the key to a @nogc Phobos.


Agreed.



Nor are ranges. There is an attitude that either output ranges, 
or input ranges in conjunction with lazy computation, would 
solve the issue of creating garbage. 
https://github.com/D-Programming-Language/phobos/pull/2423 is a 
good illustration of the latter approach: a range would be 
lazily created by chaining stuff together. A range-based 
approach would take us further than the allocators, but I see 
the following issues with it:


(a) the whole approach doesn't stand scrutiny for non-linear 
outputs, e.g. outputting some sort of associative array or 
really any composite type quickly becomes tenuous either with 
an output range (eager) or with exposing an input range (lazy);


(b) makes the style of programming without GC radically 
different, and much more cumbersome, than programming with GC; 
as a consequence, programmers who consider changing one 
approach to another, or implementing an algorithm neutral to 
it, are looking at a major rewrite;


(c) would make D/@nogc a poor cousin of C++. This is quite out 
of character; technically, I have long gotten used to seeing 
most elaborate C++ code like poor emulation of simple D idioms. 
But C++ has spent years and decades taking to perfection an 
approach without a tracing garbage collector. A departure from 
that would need to be superior, and that doesn't seem to be the 
case with range-based approaches.


I agree with this, too.



===

Now that we clarified that these existing attempts are not 
going to work well, the question remains what does. For Phobos 
I'm thinking of defining and using three policies:


enum MemoryManagementPolicy { gc, rc, mrc }
immutable
    gc = MemoryManagementPolicy.gc,
    rc = MemoryManagementPolicy.rc,
    mrc = MemoryManagementPolicy.mrc;

The three policies are:

(a) gc is the classic garbage-collected style of management;

(b) rc is a reference-counted style still backed by the GC, 
i.e. the GC will still be able to pick up cycles and other 
kinds of leaks.


(c) mrc is a reference-counted style backed by malloc.

(It should be possible to collapse rc and mrc together and make 
the distinction dynamically, at runtime. I'm distinguishing 
them statically here for expository purposes.)


The policy is a template parameter to functions in Phobos (and 
elsewhere), and informs the functions e.g. what types to 
return. Consider:


auto setExtension(MemoryManagementPolicy mmp = gc, R1, R2)(R1 path, R2 ext)
if (...)
{
    static if (mmp == gc) alias S = string;
    else alias S = RCString;
    S result;
    ...
    return result;
}

On the caller side:

auto p1 = setExtension("hello", ".txt"); // fine, use gc
auto p2 = setExtension!gc("hello", ".txt"); // same
auto p3 = setExtension!rc("hello", ".txt"); // fine, use rc

So by default it's going to continue being business as usual, 
but certain functions will allow passing in a (defaulted) 
policy for memory management.


This, however, I disagree with strongly. For one thing - this has 
already been noted by others - it would make the functions' 
implementation extremely ugly (`static if` hell), it would make 
them harder to unit test, and from a user's point of view, it's 
very tedious and might interfere badly with UFCS.


But more importantly, IMO, it's the wrong thing to do. These 
functions shouldn't know anything about memory management policy 
at all. They allocate, which means they need to know about 
_allocation_ policy, but memory _management_ policy needs to be 
decided by the user.


Now, your suggestion in a way still leaves that decision to the 
user, but does so in a very intrusive way, by passing a template 
flag. This is clea

Re: GC.sizeOf(array.ptr)

2014-09-30 Thread Sean Kelly via Digitalmars-d

On Tuesday, 30 September 2014 at 17:51:18 UTC, Steven
Schveighoffer wrote:

On 9/30/14 1:23 PM, Sean Kelly wrote:


(except for the
definition of the APPENDABLE BlkAttr, which really should be
defined externally and within the user-reserved range of the
bitfield instead of the GC-reserved range, but I digress...)


The APPENDABLE bit was added as a solution to avoid having to 
reserve that memory for all allocations. Basically, if you try 
to append to a block that doesn't have the bit, it simply 
reallocates conservatively.


So it does have to be part of the GC metadata, because the most 
important effect is on blocks that AREN'T allocated via the 
array runtime. Otherwise, the append mechanism creeps into all 
aspects of memory allocation.


Yeah I know.  But when I defined the BlkAttr bitfield I'd
reserved one portion of the range for internal GC stuff and
another portion for user-defined stuff.  APPENDABLE should
probably have landed in the user-defined portion.  I don't see
any of those comments in the current code or I'd point to them.
I guess they were deleted at some point.


Re: Creeping Bloat in Phobos

2014-09-30 Thread Dmitry Olshansky via Digitalmars-d

On 29-Sep-2014 16:43, Dicebot wrote:

I refuse to accept any code gen complaints based on DMD. Its
optimization facilities are generally crappy compared to gdc / ldc and
not worth caring about - it is just a reference implementation after
all. Clean and concise library code is more important.

Now if the same inlining failure happens with other two compilers - that
is something worth talking about (I don't know if it happens)


+1

--
Dmitry Olshansky


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Dmitry Olshansky via Digitalmars-d

On 29-Sep-2014 14:49, Andrei Alexandrescu wrote:

auto setExtension(MemoryManagementPolicy mmp = gc, R1, R2)(R1 path, R2 ext)
if (...)
{
 static if (mmp == gc) alias S = string;
 else alias S = RCString;
 S result;
 ...
 return result;
}


Incredible code bloat? Boilerplate in each function for the win?
I'm at a loss as to how it would make things better.


--
Dmitry Olshansky


Re: GC.sizeOf(array.ptr)

2014-09-30 Thread Steven Schveighoffer via Digitalmars-d

On 9/30/14 1:23 PM, Sean Kelly wrote:


(except for the
definition of the APPENDABLE BlkAttr, which really should be
defined externally and within the user-reserved range of the
bitfield instead of the GC-reserved range, but I digress...)


The APPENDABLE bit was added as a solution to avoid having to reserve 
that memory for all allocations. Basically, if you try to append to a 
block that doesn't have the bit, it simply reallocates conservatively.


So it does have to be part of the GC metadata, because the most 
important effect is on blocks that AREN'T allocated via the array 
runtime. Otherwise, the append mechanism creeps into all aspects of 
memory allocation.


-Steve


Re: GC.sizeOf(array.ptr)

2014-09-30 Thread Dicebot via Digitalmars-d

On Tuesday, 30 September 2014 at 17:28:02 UTC, Sean Kelly wrote:

On Tuesday, 30 September 2014 at 15:46:54 UTC, Steven
Schveighoffer wrote:


So I made the call to put it at the beginning of the block, 
which obviously doesn't change, and offset everything by 16 
bytes to maintain alignment.


It may very well be that we can put it at the end of the block 
instead, and you can probably do so without much effort in the 
runtime (everything uses CTFE functions to calculate padding 
and location of the capacity). It has been such a long time 
since I did that, I'm not very sure of all the reasons not to 
do it. A look through the mailing list archives might be 
useful.


Yes, a lot of this is an artifact of the relatively simplistic
manner that the current GC tracks memory.  If large blocks had a
header, for example, then this could theoretically live there and
not cause any problems.  As we move towards supporting precise
scanning, the GC will need to be aware of the types of data it
holds, and so some portion of the array appendability strategy
should probably migrate into the GC.  A redefinition of the GC
interface is many years overdue.  This just needs to be
considered when it happens.


I decided to add a similar workaround to CDGC for now and fix the 
way it is stored in druntime a bit later :)


On a related note : CDGC D2 port passes druntime test suite 
starting with today (with only shared library tests disabled), so 
initial upstream PR should happen very soon (just need to clean 
up all trace and "omg hax" commits :))


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread H. S. Teoh via Digitalmars-d
On Tue, Sep 30, 2014 at 04:10:43PM +, Sean Kelly via Digitalmars-d wrote:
> On Monday, 29 September 2014 at 10:49:53 UTC, Andrei Alexandrescu wrote:
> >
> >The policy is a template parameter to functions in Phobos (and
> >elsewhere), and informs the functions e.g. what types to return.
> >Consider:
> >
> >auto setExtension(MemoryManagementPolicy mmp = gc, R1, R2)(R1 path, R2
> >ext)
> >if (...)
> >{
> >static if (mmp == gc) alias S = string;
> >else alias S = RCString;
> >S result;
> >...
> >return result;
> >}
> 
> Is this for exposition purposes or actually how you expect it to work?
> Quite honestly, I can't imagine how I could write a template function
> in D that needs to work with this approach.
> 
> As much as I hate to say it, this is pretty much exactly what C++
> allocators were designed for.  They handle allocation, sure, but they
> also hold aliases for all relevant types for the data being allocated.
[...]
> So... while I support the goal you're aiming at, I want to see a much
> more comprehensive example of how this will work and how it will
> affect code written by D *users*.  Because it isn't enough for Phobos
> to be written this way.  Basically all D code will have to take this
> into account for the strategy to be truly viable.  Simply outlining
> one of the most basic functions in Phobos, which already looks like it
> will have a static conditional at the beginning and *need to be aware
> of the fact that an RCString type exists* makes me terrified of what a
> realistic example will look like.

Yeah, this echoes my concern. This looks not that much different, from a
user's POV, from C++ containers' allocator template parameters. Yes I
know we're not talking about *allocators* per se but about *memory
management*, but I'm talking about the need to explicitly pass mmp to
*every* *single* *function* if you desire anything but the default. How
many people actually *use* the allocator parameter in STL? Certainly,
many people do... but the code is anything but readable / maintainable.

Not only that, but every single function will have to handle this
parameter somehow, and if static if's at the top of the function is what
we're starting with, I fear seeing what we end up with.

Furthermore, in order for this to actually work, it has to be percolated
throughout the entire codebase -- any D library that even remotely uses
Phobos for anything will have to percolate this parameter throughout its
API -- at least, any part of the API that might potentially use a Phobos
function. Otherwise, you still have the situation where a given D
library doesn't allow the user to select a memory management scheme, and
internally calls Phobos functions with the default settings. So this
still doesn't solve the problem that today, people who need to use @nogc
can't use a lot of existing libraries because the library depends on the
GC, even if it doesn't assume anything about the MM scheme, but just
happens to call some obscure Phobos function with the default MM
parameter. The only way this could work would be if *every* D library author
voluntarily rewrote a lot of code in order to percolate this MM
parameter through to the API, on the off-chance that some obscure user
somewhere might have need to use it. I don't see much likelihood of this
actually happening.

Then there's the matter of functions like parseJSON() that needs to
allocate nodes and return a tree (or whatever) of these nodes. Note that
they need to *allocate*, not just know what kind of memory management
model is to be used. So how do you propose to address this? Via another
parameter (compile-time or otherwise) to specify which allocator to use?
So how does the memory management parameter solve anything then? And how
would such a thing be implemented? Using a 3-way static-if branch in
every single point in parseJSON where it needs to allocate nodes? We
could just as well write it in C++, if that's the case.

This proposal has many glaring holes that need to be fixed before it can
be viable.


T

-- 
EMACS = Extremely Massive And Cumbersome System


Re: GC.sizeOf(array.ptr)

2014-09-30 Thread Sean Kelly via Digitalmars-d

On Tuesday, 30 September 2014 at 15:46:54 UTC, Steven
Schveighoffer wrote:


So I made the call to put it at the beginning of the block, 
which obviously doesn't change, and offset everything by 16 
bytes to maintain alignment.


It may very well be that we can put it at the end of the block 
instead, and you can probably do so without much effort in the 
runtime (everything uses CTFE functions to calculate padding 
and location of the capacity). It has been such a long time 
since I did that, I'm not very sure of all the reasons not to 
do it. A look through the mailing list archives might be useful.


Yes, a lot of this is an artifact of the relatively simplistic
manner that the current GC tracks memory.  If large blocks had a
header, for example, then this could theoretically live there and
not cause any problems.  As we move towards supporting precise
scanning, the GC will need to be aware of the types of data it
holds, and so some portion of the array appendability strategy
should probably migrate into the GC.  A redefinition of the GC
interface is many years overdue.  This just needs to be
considered when it happens.


Re: GC.sizeOf(array.ptr)

2014-09-30 Thread Sean Kelly via Digitalmars-d

On Tuesday, 30 September 2014 at 13:42:14 UTC, Dicebot wrote:


Is such behaviour intended?


Yes.  As far as the GC is concerned, asking for the size of an
interior pointer is asking for the size of a slice of the data,
and as slices are not extendable it will always return 0.  All of
the APPENDABLE stuff takes place outside the GC (except for the
definition of the APPENDABLE BlkAttr, which really should be
defined externally and within the user-reserved range of the
bitfield instead of the GC-reserved range, but I digress...) and
so it has no way of knowing that someone is using the blocks this
way.
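A small sketch of the behaviour being described, using GC.sizeOf and GC.addrOf from core.memory (the exact numbers depend on the runtime version, so the comments are illustrative only):

import core.memory : GC;
import std.stdio : writeln;

void main()
{
    auto arr = new int[](1024);              // large block: runtime metadata at the start
    writeln(GC.sizeOf(arr.ptr));             // may be 0: arr.ptr is an interior pointer
    writeln(GC.sizeOf(GC.addrOf(arr.ptr)));  // size of the whole block it lives in
    writeln(arr.capacity);                   // the array runtime's own view of usable length
}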


Re: GC.sizeOf(array.ptr)

2014-09-30 Thread Steven Schveighoffer via Digitalmars-d

On 9/30/14 12:01 PM, Dicebot wrote:


I think it should be possible. That way the actual block size will simply be
considered a bit smaller and extending will happen before the reserved space is
hit. But of course I have only a very vague knowledge of druntime
acquired while porting cdgc, so I may need to think about it a bit more
and probably chat with Leandro too :)


I think it is possible, and perhaps more correct, to put the array 
append size into a separate memory block for larger sizes. The placement 
at the end works very well and has good properties for smaller sizes.


This is similar to how the flags work.

-Steve


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Sean Kelly via Digitalmars-d

On Tuesday, 30 September 2014 at 16:49:48 UTC, Johannes Pfau
wrote:


I guess my point is that although RC is useful in some cases, output ranges / sink delegates / pre-allocated buffers are still necessary in other cases and RC is not the solution for _everything_.


Yes, I'm hoping this is an adjunct to changes in Phobos to reduce
the frequency of implicit allocation in general.  The less
garbage that's generated, the less GC vs. RC actually matters.


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Szymon Gatner via Digitalmars-d
On Tuesday, 30 September 2014 at 14:34:49 UTC, Ola Fosheim 
Grøstad wrote:



Guys I beg you, is there any chance I will get my answers? ;)


Nope :)


I suspected so :P



I don't think anyone knows what extended C++ will actually look like.


Great.



Some people say  D is going to have std::* support, but that 
would require someone to keep track of changes in all the c++ 
compilers D is supposed to support: Clang, G++, and VC++…


My thoughts too. Seems like maintenance hell.



Some people say they want full support for C++ exceptions, some 
say it is too difficult…


However, you don't need std::* or C++ exceptions for a game? Some aspects of "extended C++ support" are going to be either wishful thinking or non-portable, so you probably should try to avoid depending on them.  What are you missing?


I use both std/boost and exceptions when it makes sense - a game is not just rendering and number crunching after all.


Tbh what I -am missing- is proper run-time support for what is already supposed to work (building an x64 C++/D app crashes when calling writeln() from the D side). Win32 support is coming but I expect similar problems (is nobody really mixing C++ and D using VC++ atm?).


It would be great to be able to call non-virtual members of C++ 
classes from D but
I don't really need anything else from the language SPECS to 
start things going - my question is out of pure curiosity.


That being said, my biggest fear is that D2 will never be finished... I have been lurking on these forums for 2 years now, waiting for the signal to start the transition, but I need to be sure that in a few months everything I need and the code I write will work as expected (and on iOS too). I am not seeing this, unfortunately; the language is still being actively discussed at the most basic level (allocators, ARC, auto-decoding of UTF strings...). It looks like Phobos might need to be rewritten entirely, and soon. I will not give up though; if I must skip D for one more project (which will last a year or two) then so be it, hopefully I will be able to use it for the next one :(


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Johannes Pfau via Digitalmars-d
On Tue, 30 Sep 2014 05:29:55 -0700, Andrei Alexandrescu wrote:

> 
> > Another thought: if we use a template parameter, what's the story
> > for virtual functions (e.g. Object.toString)? They can't be
> > templated.
> 
> Good point. We need to think about that.
> 

Passing buffers or sink delegates (like we already do for toString) is
possible for some functions. For toString it works fine. Then implement
to!RCString(object) using the toString(sink delegate) overload.
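A minimal sketch of that sink-based convention (Temperature is a made-up type for the example):

import std.format : formattedWrite;

struct Temperature
{
    double celsius;

    // The caller supplies the destination, so this function itself does
    // not allocate; the same overload could back a to!RCString.
    void toString(scope void delegate(const(char)[]) sink) const
    {
        sink.formattedWrite("%.1f degC", celsius);
    }
}

void main()
{
    import std.stdio : writeln;
    writeln(Temperature(21.5));  // picks up the sink overload, no temporary string
}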

For all other functions RC is indeed difficult, probably only possible
with different manually written overloads (and a dummy parameter as we
can't overload on return type)?




Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Johannes Pfau via Digitalmars-d
On Tue, 30 Sep 2014 05:23:29 -0700, Andrei Alexandrescu wrote:

> On 9/30/14, 1:34 AM, Johannes Pfau wrote:
> > So you propose RC + global/thread local allocators as the solution
> > for all memory related problems as 'memory management is not
> > allocation'. And you claim that using output ranges / providing
> > buffers / allocators is not an option because it only works in some
> > special cases?
> 
> Correct. I assume you meant an irony/sarcasm somewhere :o).

The sarcasm is supposed to be here: '_all_ memory related problems' ;-)

I guess my point is that although RC is useful in some cases output
ranges / sink delegates / pre-allocated buffers are still necessary in
other cases and RC is not the solution for _everything_.

As Manu has often pointed out, sometimes you do not want any dynamic
allocation (toStringz in games is a good example), and here RC doesn't
help.

Another example is format which can already write to output ranges and
uses sink delegates internally. That's a much better abstraction than
simply returning a reference counted string (allocated with a thread
local allocator). Using sink delegates internally is also more
efficient than creating temporary RCStrings. And sometimes there's no
allocation at all this way (directly writing to a socket/file). 
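As a concrete sketch of that abstraction (logTo is a made-up helper; lockingTextWriter and formattedWrite are the existing Phobos pieces):

import std.format : formattedWrite;
import std.stdio : File, stdout;

void logTo(File f, int id, double value)
{
    // Format straight into the file's output range; no intermediate
    // string (GC- or RC-managed) is ever materialized.
    f.lockingTextWriter.formattedWrite("id=%s value=%s\n", id, value);
}

void main()
{
    logTo(stdout, 1, 3.14);
}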

> 
> > What if I don't want automated memory _management_? What if I want a
> > function to use a stack buffer? Or if I want to free manually?
> >
> > If I want std.string.toStringz to put the result into a temporary
> > stack buffer your solution doesn't help at all. Passing an ouput
> > range, allocator or buffer would all solve this.
> 
> Correct. The output of toStringz would be either a GC string or an RC 
> string.

But why not provide 3 overloads then?

toStringz(OutputRange)
string toStringz(Policy) //char*, actually
RCString toStringz(Policy)

The notion I got from some of your posts is that you're opposed to such
overloads, or did I misinterpret that? 


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread simendsjo via Digitalmars-d
On 09/30/2014 05:32 PM, po wrote:
> On Tuesday, 30 September 2014 at 14:19:51 UTC, Araq wrote:
>>> It doesn't mention anything about moving C++ into C#.
>>> Even with IL2CPP, C# has fundamental design trade offs that make it
>>> slower than C++(GC is just one of them), so it wouldn't make much
>>> sense to port engine code to C# unless they wanted it to run slower.
>>
>> What are these fundamental design trade offs?
> 
> -GC
> -no meta programming vs meta programming
(...)

C# has wonderful meta-programming facilities: Generics! (/me runs and hides)


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Sean Kelly via Digitalmars-d
On Monday, 29 September 2014 at 10:49:53 UTC, Andrei Alexandrescu 
wrote:


The policy is a template parameter to functions in Phobos (and 
elsewhere), and informs the functions e.g. what types to 
return. Consider:


auto setExtension(MemoryManagementPolicy mmp = gc, R1, R2)(R1 path, R2 ext)
if (...)
{
    static if (mmp == gc) alias S = string;
    else alias S = RCString;
    S result;
    ...
    return result;
}


Is this for exposition purposes or actually how you expect it to 
work?  Quite honestly, I can't imagine how I could write a 
template function in D that needs to work with this approach.


As much as I hate to say it, this is pretty much exactly what C++ 
allocators were designed for.  They handle allocation, sure, but 
they also hold aliases for all relevant types for the data being 
allocated.  If the MemoryManagementPolicy enum were replaced with 
an alias to a type that I could use to at least obtain relevant 
aliases, that would be something.  But even that approach 
dramatically complicates code that uses it.


Having written standards-compliant containers in C++, I honestly 
can't imagine the average user writing code that works this way.  
Once you assert that the reference type may be a pointer or it 
may be some complex proxy to data stored elsewhere, a lot of 
composability pretty much flies right out the window.


For example, I have an implementation of C++ 
unordered_map/set/etc designed to be a customizable cache, so one 
of its template arguments is a policy type that allows eviction 
behavior to be chosen at declaration time.  Maybe the cache is 
size-limited, maybe it's age-limited, maybe it's a combination of 
the two or something even more complicated.  The problem is that 
the container defines all the aliases relating to the underlying 
data, but the policy, which needs to be aware of these, is passed 
as a template argument to this container.


To make something that's fully aware of C++ allocators then, I'd 
have to define a small type that takes the container template 
arguments (the contained type and the allocator type) and 
generates the aliases and pass this to the policy, which in turn 
passes the type through to the underlying container so it can 
declare its public aliases and whatever else is true 
standards-compliant fashion (or let the container derive this 
itself, but then you run into the potential for disagreement).  
And while this is possible, doing so would complicate the 
creation of the cache policies to the point where it subverts 
their intent, which was to make it easy for the user to tune the 
behavior of the cache to their own particular needs by defining a 
simple type which implements a few functions.  Ultimately, I 
decided against this approach for the cache container and decided 
to restrict the allocators to those which defined a pointer to T 
as T* so the policies could be coded with basically no knowledge 
of the underlying storage.
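For what it's worth, here is roughly the shape that compromise takes in D; the names (Cache, SizeLimited) are made up for illustration, and the eviction policy only ever sees element counts, so it needs no knowledge of the underlying storage:

```
// Eviction policy that only looks at the number of entries.
struct SizeLimited(size_t maxEntries)
{
    bool shouldEvict(size_t currentLen) { return currentLen >= maxEntries; }
}

// Minimal cache parameterized on an eviction policy.
struct Cache(K, V, Policy)
{
    private V[K] store;
    private Policy policy;

    void put(K key, V value)
    {
        if (policy.shouldEvict(store.length))
            store.remove(store.byKey.front); // evict an arbitrary entry
        store[key] = value;
    }

    V* get(K key) { return key in store; }
}

unittest
{
    Cache!(string, int, SizeLimited!2) cache;
    cache.put("a", 1);
    cache.put("b", 2);
    cache.put("c", 3);                // triggers an eviction
    assert(cache.get("c") !is null);
}
```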


So... while I support the goal you're aiming at, I want to see a 
much more comprehensive example of how this will work and how it 
will affect code written by D *users*.  Because it isn't enough 
for Phobos to be written this way.  Basically all D code will 
have to take this into account for the strategy to be truly 
viable.  Simply outlining one of the most basic functions in 
Phobos, which already looks like it will have a static 
conditional at the beginning and *need to be aware of the fact 
that an RCString type exists* makes me terrified of what a 
realistic example will look like.


Re: GC.sizeOf(array.ptr)

2014-09-30 Thread Dicebot via Digitalmars-d
On Tuesday, 30 September 2014 at 15:46:54 UTC, Steven 
Schveighoffer wrote:

On 9/30/14 10:24 AM, Dicebot wrote:
On Tuesday, 30 September 2014 at 14:01:17 UTC, Steven 
Schveighoffer wrote:
Assertion passes with D1/Tango runtime but fails with 
current D2
runtime. This happens because `result.ptr` is not actually a 
pointer
returned by gc_qalloc from array reallocation, but interior 
pointer 16
bytes from the start of that block. Druntime stores some 
metadata

(length/capacity I presume) in the very beginning.


This is accurate, it stores the "used" size of the array. But 
it's

only the case for arrays, not general GC.malloc blocks.

Alternative is to use result.capacity, which essentially 
looks up the
same thing (and should be more accurate). But it doesn't 
cover the

same inputs.


Why is it stored in the beginning and not in the end of the 
block (like
capacity)? I'd like to explore options of removing interior 
pointer
completely before proceeding with adding more special cases to 
GC

functions.


First, it is the capacity. It's just that the capacity lives at 
the beginning of larger blocks.


The reason is due to the ability to extend pages.

With smaller blocks (2048 bytes or less), the page is divided 
into equal portions, and those can NEVER be extended. Any 
attempt to extend results in a realloc into another block. 
Putting the capacity at the end makes sense for 2 reasons: 1. 1 
byte is already reserved to prevent cross-block pointers, 2. It 
doesn't cause alignment issues. We can't very well offset a 16 
byte block by 16 bytes. But importantly, the capacity field 
does not move.


However, for page and above size (4096+ bytes), the original 
(D1 and early D2) runtime would attempt to extend into the next 
page, without moving the data. Thus we save the copy of data 
into a new block, and just set some bits and we're done.


Ah that must be what confused me - I looked at small block offset 
calculation originally and blindly assumed same logic for other 
sizes. Sorry, my fault!


But this poses a problem for when the capacity field is stored 
at the end -- especially since we are caching the block info. 
The block info can change with a call to GC.extend (whereas a 
fixed-size block, the block info CANNOT change). Depending on 
what "version" of the block info you have, the "end" can be 
different, and you may end up corrupting data. This is 
especially important for shared or immutable array blocks, 
where multiple threads could be appending at the same time.


So I made the call to put it at the beginning of the block, 
which obviously doesn't change, and offset everything by 16 
bytes to maintain alignment.


It may very well be that we can put it at the end of the block 
instead, and you can probably do so without much effort in the 
runtime (everything uses CTFE functions to calculate padding 
and location of the capacity). It has been such a long time 
since I did that, I'm not very sure of all the reasons not to 
do it. A look through the mailing list archives might be useful.


I think it should be possible. That way the actual block size will 
simply be considered a bit smaller and extending will happen before 
the reserved space is hit. But of course I have only a very vague 
knowledge of druntime acquired while porting cdgc, so I may need to 
think about it a bit more and probably chat with Leandro too :)


Have created bugzilla issue for now : 
https://issues.dlang.org/show_bug.cgi?id=13558


Re: GC.sizeOf(array.ptr)

2014-09-30 Thread Steven Schveighoffer via Digitalmars-d

On 9/30/14 10:24 AM, Dicebot wrote:

On Tuesday, 30 September 2014 at 14:01:17 UTC, Steven Schveighoffer wrote:

Assertion passes with D1/Tango runtime but fails with current D2
runtime. This happens because `result.ptr` is not actually a pointer
returned by gc_qalloc from array reallocation, but interior pointer 16
bytes from the start of that block. Druntime stores some metadata
(length/capacity I presume) in the very beginning.


This is accurate, it stores the "used" size of the array. But it's
only the case for arrays, not general GC.malloc blocks.

Alternative is to use result.capacity, which essentially looks up the
same thing (and should be more accurate). But it doesn't cover the
same inputs.


Why is it stored in the beginning and not in the end of the block (like
capacity)? I'd like to explore options of removing interior pointer
completely before proceeding with adding more special cases to GC
functions.


First, it is the capacity. It's just that the capacity lives at the 
beginning of larger blocks.


The reason is due to the ability to extend pages.

With smaller blocks (2048 bytes or less), the page is divided into equal 
portions, and those can NEVER be extended. Any attempt to extend results 
in a realloc into another block. Putting the capacity at the end makes 
sense for 2 reasons: 1. 1 byte is already reserved to prevent 
cross-block pointers, 2. It doesn't cause alignment issues. We can't 
very well offset a 16 byte block by 16 bytes. But importantly, the 
capacity field does not move.


However, for page and above size (4096+ bytes), the original (D1 and 
early D2) runtime would attempt to extend into the next page, without 
moving the data. Thus we save the copy of data into a new block, and 
just set some bits and we're done.


But this poses a problem for when the capacity field is stored at the 
end -- especially since we are caching the block info. The block info 
can change with a call to GC.extend (whereas a fixed-size block, the 
block info CANNOT change). Depending on what "version" of the block info 
you have, the "end" can be different, and you may end up corrupting 
data. This is especially important for shared or immutable array blocks, 
where multiple threads could be appending at the same time.


So I made the call to put it at the beginning of the block, which 
obviously doesn't change, and offset everything by 16 bytes to maintain 
alignment.
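
A small sketch of what that offset means in practice; GC.addrOf (an existing core.memory call) resolves the interior pointer back to the block base, which is presumably what any sizeOf/free logic has to do:

```
import core.memory;

void main()
{
    ubyte[] result;
    result.length = 4096;

    // result.ptr points 16 bytes into the block, past the metadata,
    // so querying it directly reports nothing.
    assert(GC.sizeOf(result.ptr) == 0);

    // Resolving the interior pointer to the block base first works.
    void* base = GC.addrOf(result.ptr);
    assert(base !is null);
    assert(GC.sizeOf(base) > 0);
}
```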


It may very well be that we can put it at the end of the block instead, 
and you can probably do so without much effort in the runtime 
(everything uses CTFE functions to calculate padding and location of the 
capacity). It has been such a long time since I did that, I'm not very 
sure of all the reasons not to do it. A look through the mailing list 
archives might be useful.


-Steve


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Andrei Alexandrescu via Digitalmars-d

On 9/30/14, 7:13 AM, "Marc Schütz" wrote:

On Tuesday, 30 September 2014 at 14:05:43 UTC, Foo wrote:

On Tuesday, 30 September 2014 at 13:59:23 UTC, Andrei
Alexandrescu wrote:

On 9/30/14, 6:38 AM, Foo wrote:
This won't work because the type of "string" is different for RC vs.
GC. -- Andrei


But it would work for phobos functions without template bloat.


Only for internal allocations. If the functions want to return
something, the type must be known.


Ah, now I understand the point. Thanks. -- Andrei


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread po via Digitalmars-d

On Tuesday, 30 September 2014 at 14:19:51 UTC, Araq wrote:

It doesn't mention anything about moving C++ into C#.
Even with IL2CPP, C# has fundamental design trade offs that 
make it slower than C++(GC is just one of them), so it 
wouldn't make much sense to port engine code to C# unless they 
wanted it to run slower.


What are these fundamental design trade offs?


-GC
-no meta programming vs meta programming
-defaults to ref semantics C# vs value semantics C++
-C++ supports unique types, C# does not
-C# lambda automatic/GC, C++ lambda fully customized via capture 
list



Guys I beg you, is there any chance I will get my answers? ;)


 Sorry no clue what C++ features will be supported! Just glad 
that a language is acknowledging how important it is to inter-op 
with C++ for once.


C has fundamental design trade offs that make it slower than 
Assembly
(function prologs are just one of them), so it wouldn't make much 
sense to port engine code to C unless they wanted it to run 
slower.


 I don't really think this is the same situation. I don't think 
C# is any higher level than C++. Having a GC does not make it 
automatically a higher level language, nor does it make it more 
"productive".


 That said, I think it is much easier to be productive in C# if 
you are starting from scratch, but with the proper setup & in-depth 
knowledge of C++, it is every bit as productive (especially 
in games, where things like GC end up as more of a burden).







Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Andrei Alexandrescu via Digitalmars-d

On 9/30/14, 7:05 AM, Foo wrote:

On Tuesday, 30 September 2014 at 13:59:23 UTC, Andrei
Alexandrescu wrote:

On 9/30/14, 6:38 AM, Foo wrote:

I hate the fact that this will produce template bloat for each
function/method.
I'm also in favor of "let the user pick", but I would use a global
variable:


enum MemoryManagementPolicy { gc, rc, mrc }
immutable
gc = MemoryManagementPolicy.gc,
rc = MemoryManagementPolicy.rc,
mrc = MemoryManagementPolicy.mrc;

auto RMP = gc;


and in my code:


RMP = rc;
string str = "foo"; // compiler knows -> ref counted
// ...
RMP = gc;
string str2 = "bar"; // normal behaviour restored



This won't work because the type of "string" is different for RC vs.
GC. -- Andrei


But it would work for phobos functions without template bloat.


How is the fact there's less bloat relevant for code that doesn't work? 
I.e. it doesn't compile. It needs to return string for GC and RCString 
for RC.


Andrei



D Language Specification corrupted.

2014-09-30 Thread Mike James via Digitalmars-d

Hi,

The PDF of the D Language Specification appears to be corrupted - 
it only contains the front index.


Regards, -=mike=-


Re: Local functions infer attributes?

2014-09-30 Thread bachmeier via Digitalmars-d

On Tuesday, 30 September 2014 at 09:13:02 UTC, ixid wrote:
I also suspect Andrei is doing a major project at the moment 
which is making him uncharacteristically harsh in his 
responses, from his POV he's doing something massive to help D 
while the community has gone into a negative mode.


There are only two kinds of languages: the ones people complain 
about and the ones nobody uses. The "negative mode" will likely 
become more negative as D continues to grow in popularity.


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread via Digitalmars-d
On Tuesday, 30 September 2014 at 14:28:49 UTC, Szymon Gatner 
wrote:

On Tuesday, 30 September 2014 at 14:19:51 UTC, Araq wrote:

It doesn't mention anything about moving C++ into C#.
Even with IL2CPP, C# has fundamental design trade offs that 
make it slower than C++(GC is just one of them), so it 
wouldn't make much sense to port engine code to C# unless 
they wanted it to run slower.


What are these fundamental design trade offs?


http://en.wikipedia.org/wiki/List_of_CIL_instructions


Guys I beg you, is there any chance I will get my answers? ;)


Nope :)

I don't think anyone knows what extended C++ actually will look 
like.


Some people say D is going to have std::* support, but that 
would require someone to keep track of changes in all the C++ 
compilers D is supposed to support: Clang, G++, and VC++…


Some people say they want full support for C++ exceptions, some 
say it is too difficult…


However, you don't need std::* or C++ exceptions for a game, do 
you? Some aspects of "extended C++ support" are going to be either 
wishful thinking or non-portable, so you probably should try to 
avoid depending on them. What are you missing?


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Szymon Gatner via Digitalmars-d

On Tuesday, 30 September 2014 at 14:19:51 UTC, Araq wrote:

It doesn't mention anything about moving C++ into C#.
Even with IL2CPP, C# has fundamental design trade offs that 
make it slower than C++(GC is just one of them), so it 
wouldn't make much sense to port engine code to C# unless they 
wanted it to run slower.


What are these fundamental design trade offs?


Guys I beg you, is there any chance I will get my answers? ;)


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Paulo Pinto via Digitalmars-d

On Tuesday, 30 September 2014 at 13:59:01 UTC, po wrote:


This will most likely change when they get their IL2CPP 
production ready and start moving code from C++ to C#.


Or when Microsoft finishes the ongoing work on .NET Native.

--
Paulo


 I'd not seen IL2CPP before, but from this link:

http://blogs.unity3d.com/2014/05/20/the-future-of-scripting-in-unity/

It appears to act as a replacement for the mono VM, allowing 
for AoT compilation of .Net IL into C++, so that they can 
benefit from existing C++ compiler tech.


 It doesn't mention anything about moving C++ into C#.


No, but it is a possibility.

Even with IL2CPP, C# has fundamental design trade offs that 
make it slower than C++(GC is just one of them), so it wouldn't 
make much sense to port engine code to C# unless they wanted it 
to run slower.


C has fundamental design trade offs that make it slower than 
Assembly
(function prologs are just one of them), so it wouldn't make much 
sense to port engine code to C unless they wanted it to run 
slower.


I have been through this discussion cycle a few times since the 
Z80.


--
Paulo


Re: GC.sizeOf(array.ptr)

2014-09-30 Thread Dicebot via Digitalmars-d
On Tuesday, 30 September 2014 at 14:01:17 UTC, Steven 
Schveighoffer wrote:
Assertion passes with D1/Tango runtime but fails with current 
D2
runtime. This happens because `result.ptr` is not actually a 
pointer
returned by gc_qalloc from array reallocation, but interior 
pointer 16
bytes from the start of that block. Druntime stores some 
metadata

(length/capacity I presume) in the very beginning.


This is accurate, it stores the "used" size of the array. But 
it's only the case for arrays, not general GC.malloc blocks.


Alternative is to use result.capacity, which essentially looks 
up the same thing (and should be more accurate). But it doesn't 
cover the same inputs.


Why is it stored in the beginning and not in the end of the block 
(like capacity)? I'd like to explore options of removing interior 
pointer completely before proceeding with adding more special 
cases to GC functions.


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread via Digitalmars-d

On Tuesday, 30 September 2014 at 14:05:43 UTC, Foo wrote:

On Tuesday, 30 September 2014 at 13:59:23 UTC, Andrei
Alexandrescu wrote:

On 9/30/14, 6:38 AM, Foo wrote:
This won't work because the type of "string" is different for 
RC vs. GC. -- Andrei


But it would work for phobos functions without template bloat.


Only for internal allocations. If the functions want to return 
something, the type must be known.


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Araq via Digitalmars-d

 It doesn't mention anything about moving C++ into C#.
Even with IL2CPP, C# has fundamental design trade offs that 
make it slower than C++(GC is just one of them), so it wouldn't 
make much sense to port engine code to C# unless they wanted it 
to run slower.


What are these fundamental design trade offs?


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread John Colvin via Digitalmars-d
On Monday, 29 September 2014 at 10:49:53 UTC, Andrei Alexandrescu 
wrote:
Back when I've first introduced RCString I hinted that we have 
a larger strategy in mind. Here it is.


The basic tenet of the approach is to reckon and act on the 
fact that memory allocation (the subject of allocators) is an 
entirely distinct topic from memory management, and more 
generally resource management. This clarifies that it would be 
wrong to approach alternatives to GC in Phobos by means of 
allocators. GC is not only an approach to memory allocation, 
but also an approach to memory management. Reducing it to 
either one is a mistake. In hindsight this looks rather obvious 
but it has caused me and many people better than myself a lot 
of headache.


That said allocators are nice to have and use, and I will 
definitely follow up with std.allocator. However, std.allocator 
is not the key to a @nogc Phobos.


Nor are ranges. There is an attitude that either output ranges, 
or input ranges in conjunction with lazy computation, would 
solve the issue of creating garbage. 
https://github.com/D-Programming-Language/phobos/pull/2423 is a 
good illustration of the latter approach: a range would be 
lazily created by chaining stuff together. A range-based 
approach would take us further than the allocators, but I see 
the following issues with it:


(a) the whole approach doesn't stand scrutiny for non-linear 
outputs, e.g. outputting some sort of associative array or 
really any composite type quickly becomes tenuous either with 
an output range (eager) or with exposing an input range (lazy);


(b) makes the style of programming without GC radically 
different, and much more cumbersome, than programming with GC; 
as a consequence, programmers who consider changing one 
approach to another, or implementing an algorithm neutral to 
it, are looking at a major rewrite;


(c) would make D/@nogc a poor cousin of C++. This is quite out 
of character; technically, I have long gotten used to seeing 
most elaborate C++ code like poor emulation of simple D idioms. 
But C++ has spent years and decades taking to perfection an 
approach without a tracing garbage collector. A departure from 
that would need to be superior, and that doesn't seem to be the 
case with range-based approaches.


===

Now that we clarified that these existing attempts are not 
going to work well, the question remains what does. For Phobos 
I'm thinking of defining and using three policies:


enum MemoryManagementPolicy { gc, rc, mrc }
immutable
gc = MemoryManagementPolicy.gc,
rc = MemoryManagementPolicy.rc,
mrc = MemoryManagementPolicy.mrc;

The three policies are:

(a) gc is the classic garbage-collected style of management;

(b) rc is a reference-counted style still backed by the GC, 
i.e. the GC will still be able to pick up cycles and other 
kinds of leaks.


(c) mrc is a reference-counted style backed by malloc.

(It should be possible to collapse rc and mrc together and make 
the distinction dynamically, at runtime. I'm distinguishing 
them statically here for expository purposes.)


The policy is a template parameter to functions in Phobos (and 
elsewhere), and informs the functions e.g. what types to 
return. Consider:


auto setExtension(MemoryManagementPolicy mmp = gc, R1, R2)(R1 
path, R2 ext)

if (...)
{
static if (mmp == gc) alias S = string;
else alias S = RCString;
S result;
...
return result;
}

On the caller side:

auto p1 = setExtension("hello", ".txt"); // fine, use gc
auto p2 = setExtension!gc("hello", ".txt"); // same
auto p3 = setExtension!rc("hello", ".txt"); // fine, use rc

So by default it's going to continue being business as usual, 
but certain functions will allow passing in a (defaulted) 
policy for memory management.


Destroy!


Andrei


Instead of adding a new template parameter to every function 
(which won't necessarily play nicely with existing IFTI and 
variadic templates), why not allow template modules?


import stringRC = std.string!rc;
import stringGC = std.string!gc;


// in std/string.d
module std.string(MemoryManagementPolicy mmp)

pure @trusted S capitalize(S)(S s)
if (isSomeString!S)
{
//...

static if(mmp == MemoryManagementPolicy.gc)
{
//...
}
else static if ...
}


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Foo via Digitalmars-d

On Tuesday, 30 September 2014 at 13:59:23 UTC, Andrei
Alexandrescu wrote:

On 9/30/14, 6:38 AM, Foo wrote:

I hate the fact that this will produce template bloat for each
function/method.
I'm also in favor of "let the user pick", but I would use a 
global

variable:


enum MemoryManagementPolicy { gc, rc, mrc }
immutable
gc = MemoryManagementPolicy.gc,
rc = MemoryManagementPolicy.rc,
mrc = MemoryManagementPolicy.mrc;

auto RMP = gc;


and in my code:


RMP = rc;
string str = "foo"; // compiler knows -> ref counted
// ...
RMP = gc;
string str2 = "bar"; // normal behaviour restored



This won't work because the type of "string" is different for 
RC vs. GC. -- Andrei


But it would work for phobos functions without template bloat.


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread via Digitalmars-d

On Tuesday, 30 September 2014 at 13:59:01 UTC, po wrote:

 It doesn't mention anything about moving C++ into C#.
Even with IL2CPP, C# has fundamental design trade offs that 
make it slower than C++(GC is just one of them), so it wouldn't 
make much sense to port engine code to C# unless they wanted it 
to run slower.


Yes, the info on the CIL instruction set suggests that it is a very 
simple IR, which is an advantage if you want to prove safety or 
write portable code, but that also means CIL will have a hard 
time beating LLVM. Some performance-related decisions have to be 
taken at a higher abstraction level than CIL.


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread po via Digitalmars-d


This will most likely change when they get their IL2CPP 
production ready and start moving code from C++ to C#.


Or when Microsoft finishes the ongoing work on .NET Native.

--
Paulo


 I'd not seen IL2CPP before, but from this link:

http://blogs.unity3d.com/2014/05/20/the-future-of-scripting-in-unity/

It appears to act as a replacement for the mono VM, allowing for 
AoT compilation of .Net IL into C++, so that they can benefit 
from existing C++ compiler tech.


 It doesn't mention anything about moving C++ into C#.
Even with IL2CPP, C# has fundamental design trade offs that make 
it slower than C++(GC is just one of them), so it wouldn't make 
much sense to port engine code to C# unless they wanted it to run 
slower.


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Andrei Alexandrescu via Digitalmars-d

On 9/30/14, 6:38 AM, Foo wrote:

I hate the fact that this will produce template bloat for each
function/method.
I'm also in favor of "let the user pick", but I would use a global
variable:


enum MemoryManagementPolicy { gc, rc, mrc }
immutable
 gc = MemoryManagementPolicy.gc,
 rc = MemoryManagementPolicy.rc,
 mrc = MemoryManagementPolicy.mrc;

auto RMP = gc;


and in my code:


RMP = rc;
string str = "foo"; // compiler knows -> ref counted
// ...
RMP = gc;
string str2 = "bar"; // normal behaviour restored



This won't work because the type of "string" is different for RC vs. GC. 
-- Andrei


Re: GC.sizeOf(array.ptr)

2014-09-30 Thread Steven Schveighoffer via Digitalmars-d

On 9/30/14 9:42 AM, Dicebot wrote:

There is one issue I have encountered during CDGC porting I may need
advice with.
Consider this simple snippet:

```
import core.memory;

void main()
{
 ubyte[] result;
 result.length = 4096;
 assert(GC.sizeOf(result.ptr) > 0);
}
```

Assertion passes with D1/Tango runtime but fails with current D2
runtime. This happens because `result.ptr` is not actually a pointer
returned by gc_qalloc from array reallocation, but interior pointer 16
bytes from the start of that block. Druntime stores some metadata
(length/capacity I presume) in the very beginning.


This is accurate, it stores the "used" size of the array. But it's only 
the case for arrays, not general GC.malloc blocks.


Alternative is to use result.capacity, which essentially looks up the 
same thing (and should be more accurate). But it doesn't cover the same 
inputs.


Trivial note that you only need to allocate 2047 bytes to get this behavior.



As a result GC.sizeOf(array.ptr) results in 0 (being an interior pointer).

Interesting side effect is that this code:

```
void main()
{
 ubyte[] result;
 result.length = 4096;
 GC.free(result.ptr);
}
```

..does not actually free anything for the very same reason (result.ptr
is interior pointer), it just silently succeeds doing nothing.


This is a problem. We should provide a definitive way to dealloc an 
array, if it's not already present (assuming delete goes away).



Is such behaviour intended?


Depends on what part you are talking about ;)

But kidding aside, when I added the array appending update, I didn't 
intend to affect these functions, nor did I consider the cases above. So 
it was not intended to break these things, but I think we can repair 
them, or at least provide alternate means to create the same result.


Interestingly, I think any lookup of block information for an interior 
pointer costs the same as looking up block info for a base pointer. I 
think adding a flag that allows performing operations on interior 
pointers might make this more palatable.


But we absolutely need a way to free an array that does not require the 
knowledge of the inner workings of the runtime.
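
Until such an API exists, the closest thing to a workaround is presumably to resolve the interior pointer first; a sketch, with the caveat that whether this should be the blessed way is exactly what is being discussed here:

```
import core.memory;

void main()
{
    ubyte[] result;
    result.length = 4096;

    // GC.free(result.ptr) would silently do nothing (interior pointer),
    // so free the block base instead.
    GC.free(GC.addrOf(result.ptr));
    result = null; // the slice must not be used after this
}
```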


-Steve


Re: Local functions infer attributes?

2014-09-30 Thread John Colvin via Digitalmars-d
On Tuesday, 30 September 2014 at 07:10:08 UTC, Manu via 
Digitalmars-d wrote:

On 30 September 2014 16:06, Daniel N via Digitalmars-d wrote:

On Monday, 29 September 2014 at 04:23:08 UTC, Timon Gehr wrote:


On 09/29/2014 04:43 AM, Walter Bright wrote:



You're right that tuples in D cannot contain storage classes



void foo(ref int x){}
alias p=ParameterTypeTuple!foo;
pragma(msg, p); // (ref int)

(But this does not help.)



Well, only if you are sufficiently desperate. ;)

struct S(alias T)
{
  void f(ParameterTypeTuple!T p)
  {
  }
}

S!((ref int x, int y){}) s;


I have actually thought of that ;) ... but I tend to think that 
only D
users present on this forum are likely to make sense of that 
code, and

why.


Perhaps this might help you a little:

http://code.dlang.org/packages/storageclassutils

sure, it's not as elegant as one would like, but it at least 
provides some basic utility.


GC.sizeOf(array.ptr)

2014-09-30 Thread Dicebot via Digitalmars-d
There is one issue I have encountered during CDGC porting I may 
need advice with.

Consider this simple snippet:

```
import core.memory;

void main()
{
ubyte[] result;
result.length = 4096;
assert(GC.sizeOf(result.ptr) > 0);
}
```

Assertion passes with D1/Tango runtime but fails with current D2 
runtime. This happens because `result.ptr` is not actually a 
pointer returned by gc_qalloc from array reallocation, but 
interior pointer 16 bytes from the start of that block. Druntime 
stores some metadata (length/capacity I presume) in the very 
beginning.


As a result GC.sizeOf(array.ptr) results in 0 (being an interior 
pointer).


Interesting side effect is that this code:

```
void main()
{
ubyte[] result;
result.length = 4096;
GC.free(result.ptr);
}
```

..does not actually free anything for the very same reason 
(result.ptr is interior pointer), it just silently succeeds doing 
nothing.


Is such behaviour intended?


Re: Local functions infer attributes?

2014-09-30 Thread John Colvin via Digitalmars-d

On Tuesday, 30 September 2014 at 13:47:22 UTC, John Colvin wrote:
On Tuesday, 30 September 2014 at 07:10:08 UTC, Manu via 
Digitalmars-d wrote:

On 30 September 2014 16:06, Daniel N via Digitalmars-d wrote:
On Monday, 29 September 2014 at 04:23:08 UTC, Timon Gehr 
wrote:


On 09/29/2014 04:43 AM, Walter Bright wrote:



You're right that tuples in D cannot contain storage classes



void foo(ref int x){}
alias p=ParameterTypeTuple!foo;
pragma(msg, p); // (ref int)

(But this does not help.)



Well, only if you are sufficiently desperate. ;)

struct S(alias T)
{
 void f(ParameterTypeTuple!T p)
 {
 }
}

S!((ref int x, int y){}) s;


I have actually thought of that ;) ... but I tend to think 
that only D
users present on this forum are likely to make sense of that 
code, and

why.


Perhaps this might help you a little:

http://code.dlang.org/packages/storageclassutils

sure, it's not as elegant as one would like, but it at least 
provides some basic utility.


Also, I just wrote it in a few hours after seeing your post, so 
obviously there are plenty of improvements that could be made.


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Paulo Pinto via Digitalmars-d

On Tuesday, 30 September 2014 at 13:29:15 UTC, po wrote:


All true again, pre-allocation can fix lots of pause issues. 
And simply not using the GC in tight loops. While not the greatest 
fan of Unity, it proved that GC (on top of a VM) is not a 
concern for (I argue) most of gamedev. Minecraft was 
originally written in Java, for crying out loud, yet that didn't 
stop it from becoming a gigantic success.


 Unity the engine is written in C++. When you create a game 
using Unity you are merely scripting the C++ engine using C#, 
no different than the countless games that do the same using 
Lua.


This will most likely change when they get their IL2CPP 
production ready and start moving code from C++ to C#.


Or when Microsoft finishes the ongoing work on .NET Native.

--
Paulo


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Peter Alexander via Digitalmars-d
On Tuesday, 30 September 2014 at 08:34:26 UTC, Johannes Pfau 
wrote:
What if I don't want automated memory _management_? What if I 
want a

function to use a stack buffer? Or if I want to free manually?


Agreed. This is the common case we need to solve for, but this is 
memory allocation, not management. I'm not sure where manual 
management fits into Andrei's scheme. Andrei, could you give an 
example of, e.g. how toStringz would work with a stack buffer in 
your proposed scheme?


Another thought: if we use a template parameter, what's the story 
for virtual functions (e.g. Object.toString)? They can't be 
templated.




Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread John Colvin via Digitalmars-d
On Tuesday, 30 September 2014 at 10:06:47 UTC, Szymon Gatner 
wrote:

On Tuesday, 30 September 2014 at 09:32:05 UTC, Chris wrote:


It's good to hear that. Maybe you could write a short article 
about that once you've moved to D. "Porting games to D" or 
something like that. With D you can develop fast due to short 
compilation times, that's important for testing and 
prototyping.




I actually was planning on something like that. I am still 
thinking about writing on automatic binding generation between 
D and Lua using D's compile-time reflection. Add a UDA to a 
function, class, or method and voilà! You can call it from Lua. 
Magic!


I presume you're familiar with http://code.dlang.org/packages/luad


Re: Deprecations: Any reason left for warning stage?

2014-09-30 Thread Kenji Hara via Digitalmars-d
2014-09-27 1:15 GMT+09:00 David Nadlinger via Digitalmars-d <digitalmars-d@puremagic.com>:

> As Walter mentioned in a recent pull request discussion [1], the first
> formal deprecation protocol we came up with for language changes looked
> something like this:
>
> 1. remove from documentation
> 2. warning
> 3. deprecation
> 4. error
>
> (The "remove from documentation" step is a bit questionable, but that's
> not my point here.)
>
> However, in the meantime deprecations were changed to be informational by
> default. You now need to explicitly pass -de to turn them into
> errors that halt compilation. Thus, I think we should simply get rid of the
> warning step, just like we (de facto) eliminated the "scheduled for
> deprecation" stage from the Phobos process.
>
> Thoughts?
>

I agree that the current warning stage for the deprecated features is
useless.

Kenji Hara


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Chris via Digitalmars-d
On Tuesday, 30 September 2014 at 10:06:47 UTC, Szymon Gatner 
wrote:

On Tuesday, 30 September 2014 at 09:32:05 UTC, Chris wrote:


It's good to hear that. Maybe you could write a short article 
about that once you've moved to D. "Porting games to D" or 
something like that. With D you can develop fast due to short 
compilation times, that's important for testing and 
prototyping.




I actually was planning on something like that. I am still 
thinking about writing on automatic binding generation between 
D and Lua using D's compile-time reflection. Add a UDA to a 
function, class, or method and voilà! You can call it from Lua. 
Magic!


Great. I'm interested in Lua-D interaction. Would you share it on 
GitHub once it's done?


Have you had a look at DerelictLua: 
https://github.com/DerelictOrg/DerelictLua


Re: Local functions infer attributes?

2014-09-30 Thread via Digitalmars-d

On Tuesday, 30 September 2014 at 13:08:57 UTC, Wyatt wrote:
Semi-tangential to this discussion, but this bit hits on 
something I've been thinking for a little while... ref is, at 
its core, trying to be a non-nullable pointer.  And I get the 
strong sense that it's failing at it.


That's a very perceptive observation. :)

So you also get a sense that D could have non-nullable reference 
semantics for a ref-type?


And possibly extend it to be assignable as a 
const-mutable-reference (pointer) using a dedicated operator or 
using the more ugly "&" notation or a cast?


In Simula you had two assignment operators that made the 
ref/value issue visually distinct.


a :- b;  //a points to b
x := y;  // x is assigned value of y

It is analogous to the "==" vs "is" distinction that many 
languages make. I think it makes sense to have that visual 
distinction. It makes code easier to browse.
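
D already draws that line for class references, just without a dedicated assignment operator; a tiny example of the existing distinction:

```
class Node { int value; }

void main()
{
    auto a = new Node;
    auto b = a;        // b refers to the same object
    auto c = new Node; // c is a distinct object

    assert(a is b);             // identity: same reference
    assert(a !is c);            // different objects
    assert(a.value == c.value); // '==' compares values (both 0 here)
}
```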




Re: Local functions infer attributes?

2014-09-30 Thread Andrei Alexandrescu via Digitalmars-d

On 9/30/14, 2:13 AM, ixid wrote:

from his POV he's doing something massive to help D while the community
has gone into a negative mode.


Indeed that's an accurate characterization of my POV. -- Andrei


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Foo via Digitalmars-d
I hate the fact that this will produce template bloat for each 
function/method.
I'm also in favor of "let the user pick", but I would use a 
global variable:



enum MemoryManagementPolicy { gc, rc, mrc }
immutable
gc = MemoryManagementPolicy.gc,
rc = MemoryManagementPolicy.rc,
mrc = MemoryManagementPolicy.mrc;

auto RMP = gc;


and in my code:


RMP = rc;
string str = "foo"; // compiler knows -> ref counted
// ...
RMP = gc;
string str2 = "bar"; // normal behaviour restored



Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Foo via Digitalmars-d

On Tuesday, 30 September 2014 at 13:38:43 UTC, Foo wrote:
I hate the fact that this will produce template bloat for each 
function/method.
I'm also in favor of "let the user pick", but I would use a 
global variable:



enum MemoryManagementPolicy { gc, rc, mrc }
immutable
gc = MemoryManagementPolicy.gc,
rc = MemoryManagementPolicy.rc,
mrc = MemoryManagementPolicy.mrc;

auto RMP = gc;


and in my code:


RMP = rc;
string str = "foo"; // compiler knows -> ref counted
// ...
RMP = gc;
string str2 = "bar"; // normal behaviour restored



Of course each method/function in Phobos should use the global
RMP.


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread via Digitalmars-d

On Tuesday, 30 September 2014 at 12:51:25 UTC, Paulo  Pinto wrote:


It works when two big ifs come together.

- inside the same scope (e.g. function level)

- when the reference is not shared between threads.

While it is of limited applicability, Objective-C (and 
eventually Swift) codebases prove it helps in most real life 
use cases.


But Objective-C has thread safe ref-counting?!

If it isn't thread safe it is of very limited utility, you can 
usually get away with unique_ptr in single threaded scenarios.


Re: What are the worst parts of D?

2014-09-30 Thread Bruno Medeiros via Digitalmars-d

On 23/09/2014 20:05, Andrei Alexandrescu wrote:

On 9/23/14, 12:01 PM, Sean Kelly wrote:

On Tuesday, 23 September 2014 at 18:38:08 UTC, Andrei Alexandrescu wrote:


Well put. Again, the two things we need to work on are C++
compatibility and the GC. -- Andrei


Has much thought gone into how we'll address C++ const?


Some. A lot more needs to. -- Andrei


The resurrection of head-const?

--
Bruno Medeiros
https://twitter.com/brunodomedeiros


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Johannes Pfau via Digitalmars-d
Am Tue, 30 Sep 2014 10:47:54 +
schrieb "Vladimir Panteleev" :

> On Tuesday, 30 September 2014 at 08:34:26 UTC, Johannes Pfau 
> wrote:
> > What if I don't want automated memory _management_? What if I 
> > want a
> > function to use a stack buffer? Or if I want to free manually?
> >
> > If I want std.string.toStringz to put the result into a 
> > temporary stack
> > buffer your solution doesn't help at all. Passing an output 
> > range,
> > allocator or buffer would all solve this.
> 
> I don't understand, why wouldn't you be able to temporarily set 
> the thread-local allocator to use the stack buffer, and restore 
> it once done?

That's possible but insanely dangerous in case you forget to reset the
thread allocator. Also storing stack pointers in global state (even
thread-local) is dangerous, for example interaction with fibers could
lead to bugs, etc. (What if I set the allocator to a stack allocator
and call a function which yields from a Fiber?).

You also lose all possibilities to use 'scope' or a similar mechanism
to prevent escaping a stack pointer. 

Also a stack buffer is not a complete allocator, but in some
cases like toStringz it works even better than allocators (less
overhead as you know the required buffer size before calling toStringz
and there's only one allocation)

And it is a hack. Of course you can provide a wrapper which does
oldAlloc = threadLocalAllocator;
threadLocalAllocator = stackbuf;
scope(exit)
threadLocalAllocator = oldAlloc;
func();

But how could anybody think this is good API design? I think I'd rather
fork the required Phobos functions instead of using such a wrapper.
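
For comparison, the buffer-taking variant being argued for is tiny; the name and signature below are hypothetical (not a Phobos overload), but they show why no allocator machinery is needed when the caller already knows the required size:

```
// Copy s into buf, zero-terminate it, and return a C-compatible pointer.
// buf must have room for the terminator; the result is only valid while
// buf is alive.
const(char)* toStringzInto(const(char)[] s, char[] buf)
in
{
    assert(buf.length > s.length);
}
body
{
    buf[0 .. s.length] = s[];
    buf[s.length] = '\0';
    return buf.ptr;
}

void main()
{
    char[256] tmp;                            // stack buffer, no GC at all
    auto p = toStringzInto("hello", tmp[]);
    // p can be handed to a C API within this scope
}
```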


Re: Local functions infer attributes?

2014-09-30 Thread Sad panda via Digitalmars-d

On Tuesday, 30 September 2014 at 09:13:02 UTC, ixid wrote:

Otherwise if you like D, then try to
improve it from the inside, writing dmd/Phobos/druntime pull 
requests,

instead of doing it from the outside.


I'd never have my PR's pulled.

I'm also not as interested in language development as it might 
appear.
I'm interested in writing code and getting work done, and 
minimising

friction.
I'm interested in more efficient ways to get my work done, and 
also
opportunities to write more efficient code, but that doesn't 
mean I
want to stop doing my work and instead work on HOW I do my 
work.




I find myself in a very awkward situation where I'm too far
in... I can't go back to C++,



Have you taken a look at Rust?


Yeah, it's just too weird for me to find realistic. It also 
more rigidly asserts its opinions on you, which are, in many 
cases, not optimal. Rust typically shows a performance 
disadvantage, which I care about.
Perhaps more importantly, for practical reasons, I can't ever 
imagine
convincing a studio of hundreds of programmers to switch to 
Rust. C++ programmers can learn D by osmosis, but the staff 
retraining burden to move to Rust seems completely unrealistic 
to me.


You're a vital alternative voice, please try to stick with us. 
The interest your talk and presence generated for D was huge 
and the games industry should be a major target for D. I also 
suspect Andrei is doing a major project at the moment which is 
making him uncharacteristically harsh in his responses, from 
his POV he's doing something massive to help D while the 
community has gone into a negative mode.


+10 <3

Pardon the pandering, but I actually see Andrei, Walter and you as
making up the trinity of idealism, codegen pragmatism and
industry use respectively.


Re: Before we implement SDL package format for DUB

2014-09-30 Thread Bruno Medeiros via Digitalmars-d

On 25/08/2014 17:40, Jonathan Marler wrote:

Hello everyone,

I've been working on SDL support for DUB and wanted to get some people's
opinions on whether we should really use SDL.  I've posted my thoughts
here:
http://forum.rejectedsoftware.com/groups/rejectedsoftware.dub/thread/2263/


My opinion on this subject had been a preference that we use an 
extension to JSON (something like lenient JSON, for example: 
http://developer.android.com/reference/android/util/JsonReader.html#setLenient%28boolean%29 
)
I don't like SDL much (because it's not that well-known, is whitespace 
sensitive, and other reasons). But that's mostly a personal preference, 
SDL is not a bad choice either.


Still, when I read that ASON was being developed, as a possible 
alternative, it looked good, since ASON seemed similar to lenient-JSON. 
But then things took a turn to the worse. Nameless fields, because it 
makes the parsing application specific, is a bad idea. The data format 
should be universal, not application specific. The table feature is 
another thing I think is unnecessary, we should try to the keep the data 
format fairly simple (there's only a few things over plain JSON that I 
think are worth improving). But I think the real dealbreaker is being 
application specific. Given this, I would prefer even SDL to ASON.



--
Bruno Medeiros
https://twitter.com/brunodomedeiros


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Paulo Pinto via Digitalmars-d
On Tuesday, 30 September 2014 at 12:32:08 UTC, Ola Fosheim 
Grøstad wrote:
On Tuesday, 30 September 2014 at 12:02:10 UTC, Johannes Pfau 
wrote:

...


> Also the idea exposed in this thread that release()/retain() 
> is
purely arithmetic and can be optimized as such is quite wrong. 
retain() is conceptually a locking construct on a memory region 
that prevents reuse. I've made a case for TSX, but one can 
probably come up with other multi-threaded examples.




It works when two big ifs come together.

- inside the same scope (e.g. function level)

- when the reference is not shared between threads.

While it is of limited applicability, Objective-C (and eventually 
Swift) codebases prove it helps in most real life use cases.


--
Paulo


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Szymon Gatner via Digitalmars-d
On Monday, 29 September 2014 at 14:36:10 UTC, Jacob Carlborg 
wrote:

On 29/09/14 12:00, Szymon Gatner wrote:

Hi,

recently there is much talk about extending C++ interop in D 
but it is
unclear to me what that means. Functions and virtual class 
methods are
already callable. What else is planned in the near future? 
Exceptions?

Support for C++ templates? (that seems difficult no?).


Using templates is already supported. Note, they need to 
be instantiated on the C++ side.


Ah, cool, but I still have no clue what to expect from ongoing 
discussion on C++ interop. Does anyone know?




[1] https://github.com/D-Programming-Language/dmd/pull/3987


Yup, I saw it and this makes me very happy. iOS run-time is still 
an issue tho.


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread po via Digitalmars-d


All true again, pre-allocation can fix lots of pause issues. 
And simply not using the GC in tight loops. While not the greatest 
fan of Unity, it proved that GC (on top of a VM) is not a 
concern for (I argue) most of gamedev. Minecraft was originally 
written in Java, for crying out loud, yet that didn't stop it from 
becoming a gigantic success.


 Unity the engine is written in C++. When you create a game using 
Unity you are merely scripting the C++ engine using C#, no 
different than the countless games that do the same using Lua.




Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Szymon Gatner via Digitalmars-d

On Tuesday, 30 September 2014 at 11:46:30 UTC, Chris wrote:


Have you had a look at DerelictLua: 
https://github.com/DerelictOrg/DerelictLua


Forgot to reply to 2nd part: yes I looked at it and in fact I
tried my code using it.


Re: Local functions infer attributes?

2014-09-30 Thread Wyatt via Digitalmars-d
On Tuesday, 30 September 2014 at 11:45:37 UTC, Ola Fosheim 
Grøstad wrote:


(it is only pointers after all).


Semi-tangential to this discussion, but this bit hits on 
something I've been thinking for a little while... ref is, at its 
core, trying to be a non-nullable pointer.  And I get the strong 
sense that it's failing at it.


-Wyatt


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Szymon Gatner via Digitalmars-d

On Tuesday, 30 September 2014 at 11:46:30 UTC, Chris wrote:


Great. I'm interested in Lua-D interaction. Would you share it 
on GitHub once it's done?


Have you had a look at DerelictLua: 
https://github.com/DerelictOrg/DerelictLua


I was thinking about maybe just posting snippets on the blog but 
GitHub should be doable. I am not much of a blogger tho... 
Anyway, it would be nothing new to a D programmer, but I think it 
would be quite interesting for a C++ programmer dealing with 
variadic macros and boost.mpl for (half of) a similar result.


Re: Local functions infer attributes?

2014-09-30 Thread bearophile via Digitalmars-d

ixid:

It might be an effective argument to give bearophile some of 
the problematic code and see what his idiomatic D version looks 
like and if what you're after is elegantly achievable.


Manu is much more of an expert than me in the kind of code he writes. 
So what you propose is just going to show my limits/ignorance...


Bye,
bearophile


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Johnathan via Digitalmars-d
On Tuesday, 30 September 2014 at 08:48:19 UTC, Szymon Gatner 
wrote:
I realize AAA's have their reasons against GC but in 
that case one should probably just get a UE4 license anyway.


UE4 uses a GC internally. The issue with using D's GC for games 
is a matter of quality/adaptability.
Allocations in games should be rare (and after game startup, 
should mostly be small objects, if there's any allocations at 
all), so a GC for games would need minimal pauses and extremely 
quick small allocations.


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Szymon Gatner via Digitalmars-d

On Tuesday, 30 September 2014 at 09:53:41 UTC, Johnathan wrote:
On Tuesday, 30 September 2014 at 08:48:19 UTC, Szymon Gatner 
wrote:
I realize AAA's have their reasons against GC but in 
that case one should probably just get a UE4 license anyway.


UE4 uses a GC internally. The issue with using D's GC for games 
is a matter of quality/adaptability.


True, but not in the sense that it is using a GC-based language. It 
is a custom C++ solution tailored for the purpose.


Allocations in games should be rare (and after game startup, 
should mostly be small objects, if there's any allocations at 
all), so a GC for games would need minimal pauses and extremely 
quick small allocations.


All true again, pre-allocation can fix lots of pause issues. And 
simply not using the GC in tight loops. While not the greatest fan of 
Unity, it proved that GC (on top of a VM) is not a concern for 
(I argue) most of gamedev. Minecraft was originally written in 
Java, for crying out loud, yet that didn't stop it from becoming 
a gigantic success.




Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread via Digitalmars-d
On Tuesday, 30 September 2014 at 12:02:10 UTC, Johannes Pfau 
wrote:
That's possible but insanely dangerous in case you forget to 
reset the
thread allocator. Also storing stack pointers in global state 
(even
thread-local) is dangerous, for example interaction with fibers 
could
lead to bugs, etc. (What if I set the allocator to a stack 
allocator

and call a function which yields from a Fiber?).

You also lose all possibilities to use 'scope' or a similar 
mechanism

to prevent escaping a stack pointer.


Yes, I agree. One option would be to have a thread-local region 
allocator that can only be used for "scoped" allocation. That is, 
only for allocations that are not assigned to globals or can get 
stuck in fibers and that are returned to the calling function. 
That way the context can free the region when done and you can 
get away with little allocation overhead if used prudently.
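
A bare-bones illustration of such a region (names and sizes made up; single-threaded, and with no protection against escaping pointers, which is exactly the hard part):

```
struct Region(size_t capacity)
{
    private ubyte[capacity] buffer;
    private size_t used;

    // Bump-the-pointer allocation; returns null when the region is full.
    void[] allocate(size_t n)
    {
        enum size_t alignment = 16;
        immutable start = (used + alignment - 1) & ~(alignment - 1);
        if (start + n > buffer.length)
            return null;
        used = start + n;
        return buffer[start .. start + n];
    }

    // The whole region is released at once when the scope is done.
    void releaseAll() { used = 0; }
}

void main()
{
    Region!4096 region;
    auto a = region.allocate(100);
    auto b = region.allocate(200);
    assert(a.length == 100 && b.length == 200);
    region.releaseAll(); // O(1) reclamation of everything
}
```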


I also don't agree with the sentiment that allocation/management 
can be kept fully separate. If you have a region allocator that 
is refcounted it most certainly is interrelated with a fairly 
tight coupling.


Also the idea exposed in this thread that release()/retain() is 
purely arithmetic and can be optimized as such is quite wrong. 
retain() is conceptually a locking construct on a memory region 
that prevents reuse. I've made a case for TSX, but one can 
probably come up with other multi-threaded examples.


These hacks are not making D more attractive to people who find 
C++ lacking in elegance.


Actually, creating a phobos light with nothrow, nogc, a light 
runtime and basic building blocks such as intrinsics to build 
your own RC with compiler support sounds like a more interesting 
option.


I am really not interested in library provided allocators or RC. 
If I am not going to use malloc/GC then I want to write my own 
and have dedicated allocators for the most common objects.


I think it is quite reasonable that people who want to take the 
difficult road of not using GC at all also have to do some extra 
work, but provide a clean slate to work from!




Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Vladimir Panteleev via Digitalmars-d
On Tuesday, 30 September 2014 at 08:34:26 UTC, Johannes Pfau 
wrote:
What if I don't want automated memory _management_? What if I 
want a

function to use a stack buffer? Or if I want to free manually?

If I want std.string.toStringz to put the result into a 
temporary stack
buffer your solution doesn't help at all. Passing an output 
range,

allocator or buffer would all solve this.


I don't understand, why wouldn't you be able to temporarily set 
the thread-local allocator to use the stack buffer, and restore 
it once done?


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Andrei Alexandrescu via Digitalmars-d

On 9/30/14, 3:41 AM, Peter Alexander wrote:

On Tuesday, 30 September 2014 at 08:34:26 UTC, Johannes Pfau wrote:

What if I don't want automated memory _management_? What if I want a
function to use a stack buffer? Or if I want to free manually?


Agreed. This is the common case we need to solve for, but this is memory
allocation, not management. I'm not sure where manual management fits
into Andrei's scheme. Andrei, could you give an example of, e.g. how
toStringz would work with a stack buffer in your proposed scheme?


There would be no possibility to do that. I mean it's not there but it 
can be added e.g. as a "manual" option of performing memory management. 
The "manual" overloads for functions would require an output range 
parameter. Not all functions might support a "manual" option - that'd be 
rejected statically.



Another thought: if we use a template parameter, what's the story for
virtual functions (e.g. Object.toString)? They can't be templated.


Good point. We need to think about that.


Andrei




Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Marco Leise via Digitalmars-d
Am Mon, 29 Sep 2014 15:04:03 -0700
schrieb Andrei Alexandrescu :

> On 9/29/14, 10:16 AM, Paulo Pinto wrote:
> > Personally, I would go just for (b) with compiler support for
> > increment/decrement removal, as I think it will be too complex having to
> > support everything and this will complicate all libraries.
> 
> Compiler already knows (after inlining) that ++i and --i cancel each 
> other, so we should be in good shape there. -- Andrei

That helps with very small, inlined functions until Marc
Schütz's work on borrowed pointers makes it redundant by
unifying scoped copies of GC, RC and stack pointers.
In any case inc/dec elision is an optimization and not an
enabling feature. It sure is on the radar and can be improved
later on.

-- 
Marco



Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Paulo Pinto via Digitalmars-d
On Tuesday, 30 September 2014 at 08:48:19 UTC, Szymon Gatner 
wrote:

On Monday, 29 September 2014 at 20:15:06 UTC, bachmeier wrote:
On Monday, 29 September 2014 at 10:00:27 UTC, Szymon Gatner 
wrote:



Is that all it would take? Do you also need a GC-free standard 
library, which seems to be the need of all the others saying 
"do this and I'll switch from C++"? Are the tools good enough?


Considering how many games (and I don't mean indie anymore, but 
for example Blizzard's Hearthstone) are now created in Unity, 
which uses not only a GC but runs in Mono, I am very skeptical of 
anybody claiming GC is a no-go for games - especially since a 
native executable is being built in the case of D.


BlueByte's latest games are done in Flash.

There is an article on the German magazine Making Games about 
their experience switching from being hardcore C++ guys into the 
world of Flash and Stage3D.




I realize AAA's have their reasons against GC but in 
that case one should probably just get a UE4 license anyway.


Which also offers a C++ GC API, although quite limited from what 
I have read.


Sometimes it seems like the whole Assembly vs C vs Pascal, followed 
by C vs C++, performance discussion all over again.



--
Paulo


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Szymon Gatner via Digitalmars-d

On Tuesday, 30 September 2014 at 10:39:53 UTC, John Colvin wrote:
On Tuesday, 30 September 2014 at 10:06:47 UTC, Szymon Gatner 
wrote:

On Tuesday, 30 September 2014 at 09:32:05 UTC, Chris wrote:


It's good to hear that. Maybe you could write a short article 
about that once you've moved to D. "Porting games to D" or 
something like that. With D you can develop fast due to short 
compilation times, that's important for testing and 
prototyping.




I actually was planning on something like that. I am still 
thinking about writing on automatic binding generation between 
D and Lua using D's compile-time reflection. Add a UDA to a 
function, class, or method and voilà! You can call it from Lua. 
Magic!


I presume you're familiar with 
http://code.dlang.org/packages/luad


I am, but it is incredible how much of the binding code can be 
generated with just a few lines of D.


Please, does anybody know anything about my original question? :P



Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Jacob Carlborg via Digitalmars-d

On 30/09/14 14:29, Andrei Alexandrescu wrote:


Good point. We need to think about that.


Weren't all the methods supposed to be lifted out of Object anyway?

--
/Jacob Carlborg


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Chris via Digitalmars-d
On Tuesday, 30 September 2014 at 08:48:19 UTC, Szymon Gatner 
wrote:

On Monday, 29 September 2014 at 20:15:06 UTC, bachmeier wrote:
On Monday, 29 September 2014 at 10:00:27 UTC, Szymon Gatner 
wrote:



Is that all it would take? Do you also need a GC-free standard 
library, which seems to be the need of all the others saying 
"do this and I'll switch from C++"? Are the tools good enough?


Considering how many games (and I don't mean indie anymore, but 
for example Blizzard's Hearthstone) are now created in Unity, 
which uses not only a GC but runs in Mono, I am very skeptical of 
anybody claiming GC is a no-go for games - especially since a 
native executable is being built in the case of D.


Very interesting.

I realize AAA's have their reasons against GC but in 
that case one should probably just get a UE4 license anyway.


Tooling is acceptable for me tbh. Coming from C++ I don't have 
high expectations anyway. The only good debugger (for C++) is 
VC++ and so far I've had a surprisingly good experience with 
VisualD and a mixed C++/D application. Stepping into functions 
(across language boundaries!) just works. Viewing variable 
values works properly too, whether I am in a *.cpp or .d file atm. 
Overall, I can't complain too much, especially as I am getting all 
those goodies for free ;)


Anyway, I accept that I would be an early adopter and I am OK 
with some cons that come with it as I see more gains overall.


Btw, I think D is THE language for implementing gameplay. 
Compilation times make it on par with scripting languages, and 
since it compiles to native code there are no JIT restrictions 
on iOS, for example. In our case the AI will get rewritten from 
C++/Lua to D as soon as it is practical, which is not just yet, 
unfortunately.


It's good to hear that. Maybe you could write a short article 
about that once you've moved to D. "Porting games to D" or 
something like that. With D you can develop fast due to short 
compilation times, which is important for testing and prototyping.


iOS/ARM are very important. What's the latest state of affairs? I 
know some progress has been made but it has been off my radar for 
a month or two now.




I don't think anyone is saying C++ interop is unimportant. 
There are a lot of us already using the language and we don't 
think C++ interop is the only thing that has value. More 
important IMO would be releasing a compiler without a bunch of 
regressions. D is a lot more than a C++ replacement for 
Facebook or video game developers.


Don't get me wrong, I too want all those issues resolved; I'm 
just saying for myself that the lack of those features blocks us 
from adopting at all. And after we're on board I suspect I will 
join some other unhappy camp :P But for now we can't even get 
there.




Re: Program logic bugs vs input/environmental errors

2014-09-30 Thread Steven Schveighoffer via Digitalmars-d

On 9/29/14 3:44 PM, Jeremy Powers via Digitalmars-d wrote:

On Mon, Sep 29, 2014 at 12:28 PM, Sean Kelly via Digitalmars-d
<digitalmars-d@puremagic.com> wrote:

Checked exceptions are good in theory but they failed utterly in
Java.  I'm not interested in seeing them in D.


I've heard this before, but have not seen a reasonable argument as to
why they are a failure.  Last time this was discussed a link to a blog
was provided, with lots of discussion there - which as far as I could
tell boiled down to 'catching exceptions is ugly, and people just do the
wrong thing anyway which is ugly when you have checked exceptions.'

I am unlucky enough to write Java all day, and from my standpoint
checked exceptions are a huge win.  There are certain edges which can
catch you, but they are immensely useful in developing robust programs.
Basically checked exceptions -> recoverable problems, unchecked ->
unrecoverable/programming errors (like asserts or memory errors).
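
(For the D side of that mapping: druntime already encodes the same 
split, only as an unchecked convention - Exception and its subclasses 
for recoverable, environmental problems, Error for bugs. A minimal 
sketch; nothing below is enforced by the compiler:)

import std.conv : to, ConvException;
import std.stdio : writeln;

int parsePort(string s)
{
    // Bad input throws ConvException, an Exception: recoverable.
    return s.to!int;
}

void main()
{
    try
        writeln(parsePort("80x"));
    catch (ConvException e)            // handle and carry on
        writeln("not a number: ", e.msg);

    // assert(false) would throw AssertError, an Error: a bug, not
    // meant to be caught -- the analogue of Java's unchecked
    // exceptions.
}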


Well, the failure comes from the effort to effect a certain behavior.

Sun was looking to make programmers more diligent about handling errors. 
However, humans are lazy worthless creatures. What ends up happening is, 
the compiler complains they aren't handling an exception. They can't see 
any reason why the exception would occur, so they simply catch and 
ignore it to shut the compiler up.


In 90% of cases, they are right -- the exception will not occur. But 
because they have been "trained" to simply discard exceptions, it ends 
up defeating the purpose for the 10% of the time that they are wrong.


If you have been able to resist that temptation and handle every 
exception, then I think you are in the minority. But I have no 
evidence to back this up; it's just a belief.



Note I am not advocating adding checked exceptions to D (though I would
like it).  Point is to acknowledge that there are different kinds of
exceptions, and an exception for one part of the code may not be a
problem for the bit that invokes it.



I think this is appropriate for a lint tool, for those out there 
like yourself who want that information. But requiring checked 
exceptions is, I think, a futile attempt to outlaw natural human 
behavior.


-Steve


Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Andrei Alexandrescu via Digitalmars-d

On 9/30/14, 1:34 AM, Johannes Pfau wrote:

So you propose RC + global/thread local allocators as the solution for
all memory related problems as 'memory management is not allocation'.
And you claim that using output ranges / providing buffers / allocators
is not an option because it only works in some special cases?


Correct. I assume you meant some irony/sarcasm somewhere :o).


What if I don't want automated memory _management_? What if I want a
function to use a stack buffer? Or if I want to free manually?

If I want std.string.toStringz to put the result into a temporary stack
buffer your solution doesn't help at all. Passing an output range,
allocator or buffer would all solve this.


Correct. The output of toStringz would be either a GC string or an RC 
string.



Andrei
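
(For concreteness, a sketch of the caller-supplied-buffer option 
Johannes describes above - the name toStringzInto is made up and 
nothing like it is being proposed for Phobos here:)

import core.stdc.string : memcpy;

// Writes s plus a terminating '\0' into buf and returns a pointer into
// buf. The caller owns buf (stack array, malloc'd block, ...), so no
// GC or RC allocation is involved.
const(char)* toStringzInto(const(char)[] s, char[] buf) @nogc nothrow
{
    assert(buf.length > s.length, "buffer too small");
    if (s.length)
        memcpy(buf.ptr, s.ptr, s.length);
    buf[s.length] = '\0';
    return buf.ptr;
}

unittest
{
    char[64] tmp;                        // the temporary stack buffer
    auto p = toStringzInto("hello", tmp[]);
    assert(p[0 .. 6] == "hello\0");
}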



Re: RFC: moving forward with @nogc Phobos

2014-09-30 Thread Andrei Alexandrescu via Digitalmars-d

On 9/30/14, 3:47 AM, Vladimir Panteleev wrote:

On Tuesday, 30 September 2014 at 08:34:26 UTC, Johannes Pfau wrote:

What if I don't want automated memory _management_? What if I want a
function to use a stack buffer? Or if I want to free manually?

If I want std.string.toStringz to put the result into a temporary stack
buffer your solution doesn't help at all. Passing an output range,
allocator or buffer would all solve this.


I don't understand, why wouldn't you be able to temporarily set the
thread-local allocator to use the stack buffer, and restore it once done?


That's doable, but you don't get to place the string at a _specific_ 
buffer. -- Andrei
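
(A sketch of what that swap-and-restore would look like, and of why 
the objection holds. Everything here is hypothetical - threadAlloc, 
scratchAlloc and toStringzVia are not Phobos APIs - and a thread-local 
scratch array stands in for the stack buffer:)

import std.stdio : writeln;

// Hypothetical thread-local "current allocator" plus a scratch buffer
// (module-level variables are thread-local in D by default).
void[] function(size_t) threadAlloc;
ubyte[256] scratch;
size_t scratchUsed;

void[] scratchAlloc(size_t n)
{
    auto slice = scratch[scratchUsed .. scratchUsed + n];
    scratchUsed += n;
    return slice;
}

// Stand-in for a toStringz-like routine that allocates through the
// thread-local allocator, whatever it currently is.
const(char)* toStringzVia(const(char)[] s)
{
    auto mem = cast(char[]) threadAlloc(s.length + 1);
    mem[0 .. s.length] = s[];
    mem[s.length] = '\0';
    return mem.ptr;
}

void main()
{
    auto saved = threadAlloc;
    threadAlloc = &scratchAlloc;       // swap in the scratch buffer...
    scope (exit) threadAlloc = saved;  // ...and restore once done

    auto p = toStringzVia("hello");
    writeln(p[0 .. 5]);                // "hello"
    // It works, but toStringzVia decided *where* inside `scratch` the
    // string ends up; only passing the buffer to it directly gives the
    // caller that control.
}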


Re: Local functions infer attributes?

2014-09-30 Thread via Digitalmars-d
On Tuesday, 30 September 2014 at 00:03:40 UTC, Manu via 
Digitalmars-d wrote:
I've effected all of the change that I am capable of, and I need 
to decide if I'm happy with D as is, or not, rather than maintain 
my ambient frustration that it's sitting at 99%, with the last 1% 
unattainable to me unless I fork the language for my own use :/


Try to add some rudimentary support for ref in the compiler 
yourself?


It is not a big semantic change (it is only pointers after all).


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Szymon Gatner via Digitalmars-d

On Monday, 29 September 2014 at 20:15:06 UTC, bachmeier wrote:
On Monday, 29 September 2014 at 10:00:27 UTC, Szymon Gatner 
wrote:



Is that all it would take? Do you also need a GC-free standard 
library, which seems to be the need of all the others saying 
"do this and I'll switch from C++"? Are the tools good enough?


Considering how many games (and I don't mean indie anymore, but 
for example Blizzard's Hearthstone) are now created in Unity, 
which not only uses a GC but runs in Mono, I am very skeptical of 
anybody claiming a GC is a no-go for games - especially since a 
native executable is being built in the case of D.


I realize AAAs have their reasons against GC, but in that case 
one should probably just get a UE4 license anyway.


Tooling is acceptable for me, tbh. Coming from C++ I don't have 
high expectations anyway. The only good debugger (for C++) is 
VC++, and so far I've had a surprisingly good experience with 
VisualD and a mixed C++/D application. Stepping into functions 
(across language boundaries!) just works. Viewing variable values 
works properly too, whether I'm in a *.cpp or a *.d file atm. 
Overall, I can't complain too much, especially since I am getting 
all those goodies for free ;)


Anyway, I accept that I would be an early adopter and I am OK 
with some cons that come with it as I see more gains overall.


Btw, I think D is THE language for implementing gameplay. 
Compilation times make it on par with scripting languages, and 
since it compiles to native code there are no JIT restrictions on 
iOS, for example. In our case the AI will get rewritten from 
C++/Lua to D as soon as it is practical, which is not just yet, 
unfortunately.





I don't think anyone is saying C++ interop is unimportant. 
There are a lot of us already using the language and we don't 
think C++ interop is the only thing that has value. More 
important IMO would be releasing a compiler without a bunch of 
regressions. D is a lot more than a C++ replacement for 
Facebook or video game developers.


Don't get me wrong, I too want all those issues resolved; I'm 
just saying for myself that the lack of those features blocks us 
from adopting at all. And after we're on board I suspect I will 
join some other unhappy camp :P But for now we can't even get 
there.




