Re: More radical ideas about gc and reference counting

2014-05-17 Thread via Digitalmars-d

On Friday, 16 May 2014 at 16:58:38 UTC, Rainer Schuetze wrote:



On 14.05.2014 12:56, Marc Schütz schue...@gmx.net wrote:

On Tuesday, 13 May 2014 at 19:50:52 UTC, Rainer Schuetze wrote:

class C { C next; }

shared(C) list;

C newC()
{
   return new C; // allocated on the local heap?


Yes, because it doesn't say new shared(C).


}

void prependToList(C c)
{
   synchronized(list)
   {
   C lst = cast(C) list;
   c.next = lst;  // anything special happening here?
   list = cast(shared(C)) c;  // and here?


The casts are a problem. You're basically lying to the
compiler, so this cannot work. Instead, it should be something like:

auto newC() {
return new isolated(C);
}

void prependToList(isolated(C) c) {
synchronized(list) {
// transfer to shared heap
// afterwards, c is no longer usable (consumed)
shared(C) tmp = c.makeShared();
tmp.next = list;
list = tmp;
}
}



Sorry for the late reply.

I currently don't know of a feasible way to replace casting
away shared if you want to do more than just tmp.next = list;
(for example, calling bool check(C c); on tmp or list). I can't
implement this with a shared object, because most operations on
shared simply don't exist (ok, maybe I could for this very
simple example, so consider C contains a container to operate
on). Also, the object is already locked, so I should not have to
lock it again in a function just because it only accepts a shared C.
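
For concreteness, a minimal sketch of the cast-based workaround being
described, with a call made under the lock (check() here is just a
hypothetical helper, not part of any proposal):

    bool check(C c) { return c !is null; }  // hypothetical helper taking an unshared C

    void prependToList(C c)
    {
        synchronized (list)
        {
            C lst = cast(C) list;       // cast away shared under the lock
            check(lst);                 // ordinary calls work on the unshared view
            c.next = lst;
            list = cast(shared(C)) c;   // publish again as shared
        }
    }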


How can I prepend something to the list that is not isolated? I 
guess I cannot, as it might create references to the same object 
both shared and unshared.


My gut feeling tells me that this ownership handling through 
the type system might work, but it is very restrictive and does 
not really contribute to the first goal of D listed on the 
front page: convenience.


I haven't thought it through completely, of course, but I believe 
it doesn't have to be inconvenient. Maybe there is a nice 
solution. I already have something in mind, but I have to think 
it over first.


Re: More radical ideas about gc and reference counting

2014-05-17 Thread Jerry via Digitalmars-d
Rainer Schuetze r.sagita...@gmx.de writes:

 On 14.05.2014 12:56, Marc Schütz schue...@gmx.net wrote:
 On Tuesday, 13 May 2014 at 19:50:52 UTC, Rainer Schuetze wrote:
 class C { C next; }

 shared(C) list;

 C newC()
 {
 return new C; // allocated on the local heap?

 Yes, because it doesn't say new shared(C).

 }

 void prependToList(C c)
 {
 synchronized(list)
 {
 C lst = cast(C) list;
 c.next = lst;  // anything special happening here?
 list = cast(shared(C)) c;  // and here?

 The casts are a problem. You're basically lying to the compiler, so this
 cannot work. Instead, it should be something like:

  auto newC() {
  return new isolated(C);
  }

  void prependToList(isolated(C) c) {
  synchronized(list) {
  // transfer to shared heap
  // afterwards, c is no longer usable (consumed)
  shared(C) tmp = c.makeShared();
  tmp.next = list;
  list = tmp;
  }
  }


 Sorry for the late reply.

 I currently don't know of a feasible way to replace casting away shared if
 you want to do more than just tmp.next = list; (for example, calling bool
 check(C c); on tmp or list). I can't implement this with a shared object,
 because most operations on shared simply don't exist (ok, maybe I could for
 this very simple example, so consider C contains a container to operate
 on). Also, the object is already locked, so I should not have to lock it
 again in a function just because it only accepts a shared C.

 How can I prepend something to the list that is not isolated? I guess I
 cannot, as it might create references to the same object both shared and unshared.

 My gut feeling tells me that this ownership handling through the type system
 might work, but it is very restrictive and does not really contribute to the
 first goal of D listed on the front page: convenience.

Reading this, I had a thought.  What if synchronized temporarily
stripped the shared qualifier from list?  At the end of the synchronized
block, shared is restored.  Then:

void prependToList(C c) {
synchronized(list) {   // list is now C, not shared(C)
c.next = list; // ok
list = c;  // ok
check(list);   // ok, not a shared object here
}
// list is now shared again.  move list to shared heap if not already
}

I'm sure I'm missing something, but using shared within a synchronized
block might be saner if this worked...
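
One question such a rule would have to answer is what happens to
references that escape the block; a tiny sketch of the problem (this is
not valid D today, it assumes the semantics proposed above):

    C leaked;   // thread-local global

    void prependToList(C c)
    {
        synchronized (list)    // suppose list is viewed as plain C in here
        {
            c.next = list;
            list = c;
            leaked = list;     // escapes the synchronized block...
        }
        // ...and now aliases shared data through an unshared reference.
    }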

Jerry



Re: More radical ideas about gc and reference counting

2014-05-16 Thread Rainer Schuetze via Digitalmars-d



On 14.05.2014 12:56, Marc Schütz schue...@gmx.net wrote:

On Tuesday, 13 May 2014 at 19:50:52 UTC, Rainer Schuetze wrote:

class C { C next; }

shared(C) list;

C newC()
{
return new C; // allocated on the local heap?


Yes, because it doesn't say new shared(C).


}

void prependToList(C c)
{
synchronized(list)
{
C lst = cast(C) list;
c.next = lst;  // anything special happening here?
list = cast(shared(C)) c;  // and here?


The casts are a problem. You're basically lying to the compiler, so this
cannot work. Instead, it should be something like:

 auto newC() {
 return new isolated(C);
 }

 void prependToList(isolated(C) c) {
 synchronized(list) {
 // transfer to shared heap
 // afterwards, c is no longer usable (consumed)
 shared(C) tmp = c.makeShared();
 tmp.next = list;
 list = tmp;
 }
 }



Sorry for the late reply.

I currently don't know of a feasible way to replace casting away
shared if you want to do more than just tmp.next = list; (for
example, calling bool check(C c); on tmp or list). I can't implement
this with a shared object, because most operations on shared simply
don't exist (ok, maybe I could for this very simple example, so
consider C contains a container to operate on). Also, the object is
already locked, so I should not have to lock it again in a function
just because it only accepts a shared C.


How can I prepend something to the list that is not isolated? I guess I 
cannot, as it might create references to the same object both shared and 
unshared.


My gut feeling tells me that this ownership handling through the type 
system might work, but it is very restrictive and does not really 
contribute to the first goal of D listed on the front page: convenience.





This uses the isolated concept suggested by deadalnix here:
http://forum.dlang.org/thread/yiwcgyfzfbkzcavuq...@forum.dlang.org?page=1


}
}

void callMeFromMultipleThreads()
{
prependToList(newC());
}




Re: More radical ideas about gc and reference counting

2014-05-15 Thread Jacob Carlborg via Digitalmars-d

On 14/05/14 23:47, Dmitry Olshansky wrote:


This is curious:
http://burntsushi.net/rustdoc/regex/


It seems they have compile time regular expressions too.

--
/Jacob Carlborg


Re: More radical ideas about gc and reference counting

2014-05-15 Thread Dicebot via Digitalmars-d

On Thursday, 15 May 2014 at 06:30:40 UTC, Jacob Carlborg wrote:

On 14/05/14 23:47, Dmitry Olshansky wrote:


This is curious:
http://burntsushi.net/rustdoc/regex/


It seems they have compile time regular expressions too.


Yeah looks like my knowledge about their macro system is very 
outdated. Looks promising.


Re: More radical ideas about gc and reference counting

2014-05-14 Thread via Digitalmars-d

On Tuesday, 13 May 2014 at 19:50:52 UTC, Rainer Schuetze wrote:



On 13.05.2014 13:09, Marc Schütz schue...@gmx.net wrote:

On Tuesday, 13 May 2014 at 06:06:40 UTC, Rainer Schuetze wrote:



On 12.05.2014 13:53, Marc Schütz schue...@gmx.net wrote:


I'm surprised that you didn't include:

3. Thread-local GC, isolated zones (restricting where references to
objects of a particular heap can be placed), exempting certain threads
from GC completely, ...


This comes up from time to time, but to me it is very blurry how this
can work in reality.

Considering how shared is supposed to be used to be useful (do some
locking, then cast away shared) there is no guarantee by the
language that any object is actually thread local (no references from
other threads). Working with immutable (e.g. strings) is shared by
design.


Yes, but only a part of the data is shared. I suspect the majority of
the data in typical programs will be thread-local. If you use a message
passing model, you can improve that even further (though it requires a
way to move an object to another thread's heap). This way, you can - in
the best case - avoid the shared heap completely.


Possible, but this is with a language without shared data 
(including immutable), not D. Safely transferring a large 
object tree from one thread to another will be very expensive. 
If you try to optimize that, the D compiler won't help you to 
guarantee memory safety.


It certainly requires a few modifications to the language. That's 
why I think the current discussions about uniqueness, scope, 
isolated are so important, because it would fit together very 
well with memory management concepts like ARC, isolated heaps, 
and safety. For example, if the uniqueness concept that Walter 
recently added to DMD were not only temporary (only used for 
implicit conversion to immutable, AFAIU), but a real, permanent 
type modifier, transferring such a tree can be made safe at 
practically no runtime costs, apart from notifying the GC about 
the changed ownership. (No copying needs to happen, because the 
important thing for thread-local heaps is actually just in which 
thread references to objects can be stored, not necessarily that 
the actual objects are placed into physically distinct heaps.)




Actually I don't have much experience with shared (I usually 
use __gshared if I need a shared global). Maybe I understand 
the idea better if you show what happens with this code when 
run with thread local GC:


class C { C next; }

shared(C) list;

C newC()
{
return new C; // allocated on the local heap?


Yes, because it doesn't say new shared(C).


}   

void prependToList(C c)
{
synchronized(list)
{
C lst = cast(C) list;
c.next = lst;  // anything special happening here?
list = cast(shared(C)) c;  // and here?


The casts are a problem. You're basically lying to the compiler, 
so this cannot work. Instead, it should be something like:


auto newC() {
return new isolated(C);
}

void prependToList(isolated(C) c) {
synchronized(list) {
// transfer to shared heap
// afterwards, c is no longer usable (consumed)
shared(C) tmp = c.makeShared();
tmp.next = list;
list = tmp;
}
}

This uses the isolated concept suggested by deadalnix here:
http://forum.dlang.org/thread/yiwcgyfzfbkzcavuq...@forum.dlang.org?page=1


}
}

void callMeFromMultipleThreads()
{
prependToList(newC());
}
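
The isolated(...) syntax above is a language-level idea; purely as an
illustration, a rough library-level approximation (with none of the
compiler guarantees, and with all names hypothetical) might look like:

    struct Isolated(T)
    {
        private T payload;
        @disable this(this);               // move-only, no copies

        shared(T) makeShared()             // "consumes" the unique reference
        {
            auto p = payload;
            payload = null;
            return cast(shared(T)) p;      // the single unsafe step, kept in one place
        }
    }

    Isolated!C newC() { return Isolated!C(new C); }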




Re: More radical ideas about gc and reference counting

2014-05-14 Thread bearophile via Digitalmars-d

Walter Bright:


On 5/12/2014 10:25 AM, bearophile wrote:

Walter Bright:

Unions of pointers are so rare in actual code that treating them
conservatively is not a big problem.


std.variant.Algebraic is based on std.variant.VariantN, and
std.variant.VariantN is based on a union, and often you use algebraic data
types to represent trees and similar data structures that contain many
references/pointers. Adding an onGC() method to std.variant.VariantN would
allow the GC to manage Algebraic well enough.


BTW, the RTinfo can be used to discriminate unions.


Do I have to open a Bugzilla entry asking for VariantN to use 
such RTinfo to allow a more precise tracing of pointer-heavy 
Algebraics? Suggestions are welcome.
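
For reference, the kind of pointer-heavy Algebraic in question can be
built with std.variant's This placeholder; a small sketch of what the
GC has to deal with:

    import std.typecons : Tuple, tuple;
    import std.variant : Algebraic, This;

    // A binary tree: either a leaf int or a pair of pointers to subtrees.
    // The GC pointers end up inside VariantN's internal union.
    alias Tree = Algebraic!(int, Tuple!(This*, This*));

    void main()
    {
        auto left  = new Tree(1);
        auto right = new Tree(2);
        auto root  = Tree(tuple(left, right));   // the union now holds two GC pointers
    }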


Bye,
bearophile


Re: More radical ideas about gc and reference counting

2014-05-14 Thread Dicebot via Digitalmars-d

On Tuesday, 13 May 2014 at 21:16:55 UTC, Timon Gehr wrote:

On 05/13/2014 09:07 PM, Jacob Carlborg wrote:

On 2014-05-13 15:56, Dicebot wrote:

Judging by http://static.rust-lang.org/doc/0.6/tutorial-macros.html
those are not full-blown AST macros like the ones you have been
proposing, more like a hygienic version of C macros.


Hmm, I haven't looked at Rust macros that much.



Again, the following is an example of Rust macros in action. A 
bf program is compiled to Rust code at compile time.


https://github.com/huonw/brainfuck_macro/blob/master/lib.rs

Compile-time computations create an AST which is then spliced. 
Seems full-blown enough to me and not at all like C macros.


Comments look promising but unfortunately I can't understand a 
single line of actual code :) My Rust-fu is too weak.


Re: More radical ideas about gc and reference counting

2014-05-14 Thread Wyatt via Digitalmars-d

On Tuesday, 13 May 2014 at 19:56:20 UTC, Rainer Schuetze wrote:


I just read the first chapters, and according to that, existing
local GCs need write barriers, so we are back to my second
proposal. The implementation in the paper even adds read
barriers.


At this point, I suspect write barriers are simply a requirement 
for any modern scalable GC.  At the very least, the algorithms 
they enable are important if D is really committed to the 
pluggable GC concept.  (Given the ongoing discussion on 
performance tradeoffs of various GC methods and the needs of 
different groups, I'd even suggest it's a really important 
feature.)
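
Purely to illustrate what such a barrier does, a conceptual
card-marking sketch (not anything from druntime; all names are made
up):

    enum cardShift = 9;                 // 512-byte cards, arbitrary choice
    __gshared ubyte[] cardTable;        // one byte per card of the heap
    __gshared void*  heapBase;

    void storePointer(T)(ref T* slot, T* value)
    {
        slot = value;                                        // the actual store
        auto off = cast(size_t)&slot - cast(size_t)heapBase; // which card was written?
        cardTable[off >> cardShift] = 1;                     // remember it for the GC
    }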


To me, shared is a type modifier that doesn't implicitly
convert to anything else without casting.


Interesting, maybe this should be clarified better.  Having
skimmed back over chapter 13 of TDPL, my understanding of its
semantics is that it only really enforces atomicity and
execution order.  Also, this bit from near the beginning of 13.12
states: "For all numeric types and function pointers, shared-qualified
values are convertible implicitly to and from unqualified values."

That sounds kind of at odds with your interpretation...? :/

-Wyatt


Re: More radical ideas about gc and reference counting

2014-05-14 Thread Timon Gehr via Digitalmars-d

On 05/14/2014 06:41 PM, Wyatt wrote:



To me, shared is a type modifier that doesn't implicitly convert to
anything else without casting.


Interesting, maybe this should be clarified better.  Having skimmed back
over chapter 13 of TDPL, my understanding of its semantics is that it
only really enforces atomicity and execution order.  Also, this bit from
near the beginning of 13.12 states: "For all numeric types and function
pointers, shared-qualified values are convertible implicitly to and from
unqualified values."

That sounds kind of at odds with your interpretation...? :/


Not too much. Those are trivial cases (one copies the entire qualified 
data, so obviously one is free to change qualifiers as one wishes).


Re: More radical ideas about gc and reference counting

2014-05-14 Thread Dmitry Olshansky via Digitalmars-d

On 14-May-2014 17:16, Dicebot wrote:

On Tuesday, 13 May 2014 at 21:16:55 UTC, Timon Gehr wrote:

On 05/13/2014 09:07 PM, Jacob Carlborg wrote:

On 2014-05-13 15:56, Dicebot wrote:


Judging by http://static.rust-lang.org/doc/0.6/tutorial-macros.html
those are not full-blown AST macros like the ones you have been proposing,
more like a hygienic version of C macros.


Hmm, I haven't looked at Rust macros that much.



Again, the following is an example of Rust macros in action. A bf
program is compiled to Rust code at compile time.

https://github.com/huonw/brainfuck_macro/blob/master/lib.rs

Compile-time computations create an AST which is then spliced. Seems
full-blown enough to me and not at all like C macros.


Comments look promising but unfortunately I can't understand a single
line of actual code :) My Rust-fu is too weak.


This is curious:
http://burntsushi.net/rustdoc/regex/

--
Dmitry Olshansky


Re: More radical ideas about gc and reference counting

2014-05-13 Thread Rainer Schuetze via Digitalmars-d



On 12.05.2014 13:53, Marc Schütz schue...@gmx.net wrote:


I'm surprised that you didn't include:

3. Thread-local GC, isolated zones (restricting where references to
objects of a particular heap can be placed), exempting certain threads
from GC completely, ...


This comes up from time to time, but to me it is very blurry how this 
can work in reality.


Considering how shared is supposed to be used to be useful (do some 
locking, then cast away shared) there is no guarantee by the language 
that any object is actually thread local (no references from other 
threads). Working with immutable (e.g. strings) is shared by design.


Re: More radical ideas about gc and reference counting

2014-05-13 Thread Rainer Schuetze via Digitalmars-d



On 13.05.2014 00:15, Martin Nowak wrote:

On 05/11/2014 08:18 PM, Rainer Schuetze wrote:


1. Use a scheme that takes a snapshot of the heap, stack and registers
at the moment of collection and do the actual collection in another
thread/process while the application can continue to run. This is the
way Leandro Lucarellas concurrent GC works
(http://dconf.org/2013/talks/lucarella.html), but it relies on fork
that doesn't exist on every OS/architecture. A manual copy of the memory
won't scale to very large memory, though it might be compressed to
possible pointers. Worst case it will need twice as much memory as the
current heap.


There is a problem with this scheme, copy-on-write is extremely
expensive when a mutation happens. That's one page fault (context
switch) + copying a whole page + mapping the new page.


I agree that this might be critical, but it is a one time cost per page. 
It seems unrealistic to do this with user mode exceptions, but the OS 
should have this optimized pretty well.


 It's much worse
 with huge pages (2MB page size).

How common are huge pages nowadays?
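
To put a number on that one-time-per-page cost, a rough measurement
sketch (POSIX only, just an experiment, not a collector):

    import core.sys.posix.unistd : fork, _exit, sleep;
    import core.sys.posix.sys.wait : waitpid;
    import core.time : MonoTime;
    import std.stdio : writeln;

    void main()
    {
        enum pageSize = 4096;
        auto heap = new ubyte[](64 * 1024 * 1024);
        heap[] = 1;                              // make sure the pages are mapped

        auto pid = fork();                       // marks the parent's pages copy-on-write
        if (pid == 0) { sleep(2); _exit(0); }    // stand-in for the collecting child

        auto start = MonoTime.currTime;
        for (size_t i = 0; i < heap.length; i += pageSize)
            heap[i] += 1;                        // first write per page pays the COW fault
        writeln("first touch after fork: ",
                (MonoTime.currTime - start).total!"msecs", " ms");

        int status;
        waitpid(pid, &status, 0);
    }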


Re: More radical ideas about gc and reference counting

2014-05-13 Thread via Digitalmars-d

On Tuesday, 13 May 2014 at 06:12:46 UTC, Rainer Schuetze wrote:



On 13.05.2014 00:15, Martin Nowak wrote:



There is a problem with this scheme, copy-on-write is extremely
expensive when a mutation happens. That's one page fault (context
switch) + copying a whole page + mapping the new page.


I agree that this might be critical, but it is a one time cost 
per page. It seems unrealistic to do this with user mode 
exceptions, but the OS should have this optimized pretty well.


As I pointed out this won't help dynamic games that easily can 
touch 50K pages per frame if you use a single global allocator: 
2000 cycles * 50K = 100M cycles = frame drop. What's worse, if you are 
low on memory you will start to swap to disk (or compress pages).


So that means you have to optimize for collections, use dedicated 
allocators that keep dynamic data on the same pages etc... 
Basically you get the disadvantage of manual memory management 
and in the worst case the memory requirements of a copying GC 
without the benefits...


Re: More radical ideas about gc and reference counting

2014-05-13 Thread via Digitalmars-d

On Tuesday, 13 May 2014 at 06:06:40 UTC, Rainer Schuetze wrote:



On 12.05.2014 13:53, Marc Schütz schue...@gmx.net wrote:


I'm surprised that you didn't include:

3. Thread-local GC, isolated zones (restricting where references to
objects of a particular heap can be placed), exempting certain threads
from GC completely, ...


This comes up from time to time, but to me it is very blurry 
how this can work in reality.


Considering how shared is supposed to be used to be useful 
(do some locking, then cast away shared) there is no 
guarantee by the language that any object is actually thread 
local (no references from other threads). Working with 
immutable (e.g. strings) is shared by design.


Yes, but only a part of the data is shared. I suspect the 
majority of the data in typical programs will be thread-local. If 
you use a message passing model, you can improve that even 
further (though it requires a way to move an object to another 
thread's heap). This way, you can - in the best case - avoid the 
shared heap completely.
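
That message-passing model already exists in std.concurrency; a
minimal example where only immutable data crosses threads:

    import std.concurrency : spawn, send, receive, ownerTid;
    import std.stdio : writeln;

    void worker()
    {
        receive((immutable(int)[] data) {        // only immutable/shared may cross
            writeln("got ", data.length, " ints");
            send(ownerTid, true);
        });
    }

    void main()
    {
        auto tid = spawn(&worker);
        immutable(int)[] payload = [1, 2, 3];
        send(tid, payload);
        receive((bool done) {});
    }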


Re: More radical ideas about gc and reference counting

2014-05-13 Thread Wyatt via Digitalmars-d

On Tuesday, 13 May 2014 at 06:06:40 UTC, Rainer Schuetze wrote:


This comes up from time to time, but to me it is very blurry 
how this can work in reality.


The paper I linked on Friday [0] presents a collector like this.  
Are there concerns I've missed that make that not applicable?


Considering how shared is supposed to be used to be useful 
(do some locking, then cast away shared) there is no 
guarantee by the language that any object is actually thread 
local (no references from other threads). Working with 
immutable (e.g. strings) is shared by design.


I'm not seeing much in the documentation, but from what I can 
tell (per the FAQ), shared in D just guarantees it's on the 
global heap?


-Wyatt

[0] 
https://research.microsoft.com/en-us/um/people/simonpj/papers/parallel/local-gc.pdf


Re: More radical ideas about gc and reference counting

2014-05-13 Thread Dicebot via Digitalmars-d

On Monday, 12 May 2014 at 19:32:49 UTC, Jacob Carlborg wrote:

On 2014-05-12 19:14, Dicebot wrote:

It lacks any good static reflection though. And this stuff is damn
addictive when you have tried it at D's caliber.


It has macros, that basically requires great support for static 
reflection to be usable.


Judging by http://static.rust-lang.org/doc/0.6/tutorial-macros.html those
are not full-blown AST macros like the ones you have been proposing,
more like a hygienic version of C macros.


Re: More radical ideas about gc and reference counting

2014-05-13 Thread Steven Schveighoffer via Digitalmars-d
On Tue, 13 May 2014 02:12:44 -0400, Rainer Schuetze r.sagita...@gmx.de  
wrote:





On 13.05.2014 00:15, Martin Nowak wrote:

On 05/11/2014 08:18 PM, Rainer Schuetze wrote:


1. Use a scheme that takes a snapshot of the heap, stack and registers
at the moment of collection and do the actual collection in another
thread/process while the application can continue to run. This is the
way Leandro Lucarellas concurrent GC works
(http://dconf.org/2013/talks/lucarella.html), but it relies on fork
that doesn't exist on every OS/architecture. A manual copy of the memory
won't scale to very large memory, though it might be compressed to
possible pointers. Worst case it will need twice as much memory as the
current heap.


There is a problem with this scheme, copy-on-write is extremely
expensive when a mutation happens. That's one page fault (context
switch) + copying a whole page + mapping the new page.


I agree that this might be critical, but it is a one time cost per page.  
It seems unrealistic to do this with user mode exceptions, but the OS  
should have this optimized pretty well.


  It's much worse
  with huge pages (2MB page size).

How common are huge pages nowadays?


I know this is coming from a position of extreme ignorance, but why do we  
have to do copy on write? What about pause on write? In other words, if a  
thread tries to write to a page that's being used by the collector, it  
pauses the thread until the page is no longer being used by the GC.


This doesn't fix everything, but it's at least as good as today's GC,  
which preemptively pauses threads.


My ignorance is that I have no idea if this is even possible, and I also  
have no idea how it would affect performance.


-Steve


Re: More radical ideas about gc and reference counting

2014-05-13 Thread Timon Gehr via Digitalmars-d

On 05/13/2014 03:56 PM, Dicebot wrote:

On Monday, 12 May 2014 at 19:32:49 UTC, Jacob Carlborg wrote:

On 2014-05-12 19:14, Dicebot wrote:


It lacks any good static reflection though. And this stuff is damn
addictive when you have tried it at D's caliber.


It has macros, that basically requires great support for static
reflection to be usable.


Judging by http://static.rust-lang.org/doc/0.6/tutorial-macros.html
those are not full-blown AST macros like the ones you have been proposing,
more like a hygienic version of C macros.


https://github.com/huonw/brainfuck_macro/blob/master/lib.rs


Re: More radical ideas about gc and reference counting

2014-05-13 Thread via Digitalmars-d
On Tuesday, 13 May 2014 at 14:13:31 UTC, Steven Schveighoffer 
wrote:
I know this is coming from a position of extreme ignorance, but 
why do we have to do copy on write? What about pause on write?


Not sure how that will help? Pointers may still escape collection?

(but you get that with transactional memory, on the cache level)


Re: More radical ideas about gc and reference counting

2014-05-13 Thread via Digitalmars-d
On Tuesday, 13 May 2014 at 16:14:07 UTC, Ola Fosheim Grøstad 
wrote:
On Tuesday, 13 May 2014 at 14:13:31 UTC, Steven Schveighoffer 
wrote:
Not sure how that will help? Pointers may still escape 
collection?


(as in not being traced, leading to freeing live memory)


Re: More radical ideas about gc and reference counting

2014-05-13 Thread Steven Schveighoffer via Digitalmars-d
On Tue, 13 May 2014 12:14:06 -0400, Ola Fosheim Grøstad  
ola.fosheim.grostad+dl...@gmail.com wrote:



On Tuesday, 13 May 2014 at 14:13:31 UTC, Steven Schveighoffer wrote:
I know this is coming from a position of extreme ignorance, but why do  
we have to do copy on write? What about pause on write?


Not sure how that will help? Pointers may still escape collection?

(but you get that with transactional memory, on the cache level)


My understanding is that the way the fork collector works is that it makes  
pages copy-on-write. Then if the original process writes to one of the  
pages, the page is copied, which may be expensive in terms of total memory  
and time consumed.


The idea I had was to make them pause-on-write. This means, when the  
original process attempts to write to the page, it gets a page-fault,  
which pauses the thread until the collector is done with it. This causes  
the same halting that normally happens with stop-the-world, but only  
on-demand, instead of preemptively. If a thread is doing only reads, or is  
only touching non-Scanned memory, it continues.
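
Very roughly, the mechanism described here might look something like
this on POSIX (mprotect plus a SIGSEGV handler; only a sketch, all
names and details made up):

    import core.atomic : atomicLoad;
    import core.stdc.signal : SIGSEGV;
    import core.sys.posix.signal : sigaction, sigaction_t, siginfo_t, SA_SIGINFO;
    import core.sys.posix.sys.mman : mprotect, PROT_READ, PROT_WRITE;

    shared bool collectorBusy;          // set while the GC is using the page
    __gshared void* page;               // the page the GC is currently scanning
    enum pageSize = 4096;

    extern (C) void onWriteFault(int sig, siginfo_t* info, void* ctx)
    {
        while (atomicLoad(collectorBusy)) {}                 // pause the writing thread
        mprotect(page, pageSize, PROT_READ | PROT_WRITE);    // re-allow writes
        // returning from the handler retries the faulting store
    }

    void installPauseOnWrite()
    {
        sigaction_t sa;
        sa.sa_sigaction = &onWriteFault;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, null);
    }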


A collector may be able to take advantage of this knowledge to avoid as  
many pauses as possible, but I'm not sure.


Just an idea, I make no claims to its actual benefits :)

-Steve


Re: More radical ideas about gc and reference counting

2014-05-13 Thread Steven Schveighoffer via Digitalmars-d
On Tue, 13 May 2014 13:36:17 -0400, Daniel Murphy  
yebbliesnos...@gmail.com wrote:


Steven Schveighoffer  wrote in message  
news:op.xfs6jhp3eav7ka@stevens-macbook-pro-2.local...


If a thread is doing only reads, or is  only touching non-Scanned  
memory, it continues.


How long do threads usually go without touching the stack?


The stack would have to be COW.

-Steve


Re: More radical ideas about gc and reference counting

2014-05-13 Thread Steven Schveighoffer via Digitalmars-d
On Tue, 13 May 2014 13:37:05 -0400, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Tue, 13 May 2014 13:36:17 -0400, Daniel Murphy  
yebbliesnos...@gmail.com wrote:


Steven Schveighoffer  wrote in message  
news:op.xfs6jhp3eav7ka@stevens-macbook-pro-2.local...


If a thread is doing only reads, or is  only touching non-Scanned  
memory, it continues.


How long do threads usually go without touching the stack?


The stack would have to be COW.


No sorry, the stack only contains roots, so it can be marked as safe to  
use pretty quickly.


-Steve


Re: More radical ideas about gc and reference counting

2014-05-13 Thread Daniel Murphy via Digitalmars-d
Steven Schveighoffer  wrote in message 
news:op.xfs6jhp3eav7ka@stevens-macbook-pro-2.local...


If a thread is doing only reads, or is  only touching non-Scanned memory, 
it continues.


How long do threads usually go without touching the stack? 



Re: More radical ideas about gc and reference counting

2014-05-13 Thread Jacob Carlborg via Digitalmars-d

On 2014-05-13 15:56, Dicebot wrote:


Judging by http://static.rust-lang.org/doc/0.6/tutorial-macros.html
those are not full-blown AST macros like the ones you have been proposing,
more like a hygienic version of C macros.


Hmm, I haven't looked at Rust macros that much.

--
/Jacob Carlborg


Re: More radical ideas about gc and reference counting

2014-05-13 Thread via Digitalmars-d
On Tuesday, 13 May 2014 at 17:37:05 UTC, Steven Schveighoffer 
wrote:
On Tue, 13 May 2014 13:36:17 -0400, Daniel Murphy 
yebbliesnos...@gmail.com wrote:


Steven Schveighoffer  wrote in message 
news:op.xfs6jhp3eav7ka@stevens-macbook-pro-2.local...


If a thread is doing only reads, or is  only touching 
non-Scanned memory, it continues.


How long do threads usually go without touching the stack?


The stack would have to be COW.


When you fork a process, all of its pages are COW. What you 
propose is much more complicated. You would either first need to 
fork(), then somehow remap certain pages as shared. The remapping 
would have to happen in the parent process. I don't know whether 
there are even interfaces to do this.


Re: More radical ideas about gc and reference counting

2014-05-13 Thread via Digitalmars-d
On Tuesday, 13 May 2014 at 17:22:19 UTC, Steven Schveighoffer 
wrote:
The idea I had was to make them pause-on-write. This means, 
when the original process attempts to write to the page, it 
gets a page-fault, which pauses the thread until the collector 
is done with it. This causes the same halting that normally 
happens with stop-the-world, but only on-demand, instead of 
preemptively. If a thread is doing only reads, or is only 
touching non-Scanned memory, it continues.


That would require a separate page table and would freeze threads 
real fast for global GC. It can work for local heaps like caches? 
I guess you could set up shared memory between two processes and 
use that as your heap.



Just an idea, I make no claims to its actual benefits :)


Another idea is to use transactions and roll back the collector 
if someone does a write, for small heaps with few writes... 
Principle: run the collector when a core is idle and yield to 
cores that do real work.


But I don't think current gen CPUs can handle big transactions...




Re: More radical ideas about gc and reference counting

2014-05-13 Thread Rainer Schuetze via Digitalmars-d



On 13.05.2014 13:09, Marc Schütz schue...@gmx.net wrote:

On Tuesday, 13 May 2014 at 06:06:40 UTC, Rainer Schuetze wrote:



On 12.05.2014 13:53, Marc Schütz schue...@gmx.net wrote:


I'm surprised that you didn't include:

3. Thread-local GC, isolated zones (restricting where references to
objects of a particular heap can be placed), exempting certain threads
from GC completely, ...


This comes up from time to time, but to me it is very blurry how this
can work in reality.

Considering how shared is supposed to be used to be useful (do some
locking, then cast away shared) there is no guarantee by the
language that any object is actually thread local (no references from
other threads). Working with immutable (e.g. strings) is shared by
design.


Yes, but only a part of the data is shared. I suspect the majority of
the data in typical programs will be thread-local. If you use a message
passing model, you can improve that even further (though it requires a
way to move an object to another thread's heap). This way, you can - in
the best case - avoid the shared heap completely.


Possible, but this is with a language without shared data (including 
immutable), not D. Safely transferring a large object tree from one 
thread to another will be very expensive. If you try to optimize that, 
the D compiler won't help you to guarantee memory safety.


Actually I don't have much experience with shared (I usually use 
__gshared if I need a shared global). Maybe I understand the idea better 
if you show what happens with this code when run with thread local GC:


class C { C next; }

shared(C) list;

C newC()
{
return new C; // allocated on the local heap?
}   

void prependToList(C c)
{
synchronized(list)
{
C lst = cast(C) list;
c.next = lst;  // anything special happening here?
list = cast(shared(C)) c;  // and here?
}
}

void callMeFromMultipleThreads()
{
prependToList(newC());
}


Re: More radical ideas about gc and reference counting

2014-05-13 Thread Rainer Schuetze via Digitalmars-d



On 13.05.2014 14:20, Wyatt wrote:

On Tuesday, 13 May 2014 at 06:06:40 UTC, Rainer Schuetze wrote:


This comes up from time to time, but to me it is very blurry how this
can work in reality.


The paper I linked on Friday [0] presents a collector like this. Are
there concerns I've missed that make that not applicable?


I just read the first chapters, and according to that, existing local
GCs need write barriers, so we are back to my second proposal. The
implementation in the paper even adds read barriers.





Considering how shared is supposed to be used to be useful (do some
locking, then cast away shared) there is no guarantee by the
language that any object is actually thread local (no references from
other threads). Working with immutable (e.g. strings) is shared by
design.


I'm not seeing much in the documentation, but from what I can tell (per
the FAQ), shared in D just guarantees it's on the global heap?


To me, shared is a type modifier that doesn't implicitly convert to
anything else without casting.
If a global/static variable is shared, it is placed into the data 
segment of the executable image, not TLS. The heap is not involved here.
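
In other words, just as an illustration of where things live:

    int tlsCounter;              // thread-local storage, one copy per thread
    shared int sharedCounter;    // data segment, one copy, shared-qualified
    __gshared int rawCounter;    // data segment, one copy, unqualified (no checking)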




-Wyatt

[0]
https://research.microsoft.com/en-us/um/people/simonpj/papers/parallel/local-gc.pdf



Re: More radical ideas about gc and reference counting

2014-05-13 Thread Timon Gehr via Digitalmars-d

On 05/13/2014 09:07 PM, Jacob Carlborg wrote:

On 2014-05-13 15:56, Dicebot wrote:


Judging by http://static.rust-lang.org/doc/0.6/tutorial-macros.html
those are not full-blown AST macros like the ones you have been proposing,
more like a hygienic version of C macros.


Hmm, I haven't looked at Rust macros that much.



Again, the following is an example of Rust macros in action. A bf 
program is compiled to Rust code at compile time.


https://github.com/huonw/brainfuck_macro/blob/master/lib.rs

Compile-time computations create an AST which is then spliced. Seems 
full-blown enough to me and not at all like C macros.


Re: More radical ideas about gc and reference counting

2014-05-13 Thread Tommi via Digitalmars-d

On Tuesday, 13 May 2014 at 21:16:55 UTC, Timon Gehr wrote:

On 05/13/2014 09:07 PM, Jacob Carlborg wrote:

On 2014-05-13 15:56, Dicebot wrote:

Judging by http://static.rust-lang.org/doc/0.6/tutorial-macros.html
those are not full-blown AST macros like the ones you have been
proposing, more like a hygienic version of C macros.


Hmm, I haven't looked at Rust macros that much.



Again, the following is an example of Rust macros in action. A 
bf program is compiled to Rust code at compile time.


https://github.com/huonw/brainfuck_macro/blob/master/lib.rs

Compile-time computations create an AST which is then spliced. 
Seems full-blown enough to me and not at all like C macros.


I really don't know, but my understanding is that there are two 
types of macros in Rust. There's the simple hygienic macro one, 
which has some documentation: 
http://static.rust-lang.org/doc/0.10/guide-macros.html. Then 
there's the other, newer kind of macros (they may be called 
procedural macros... or not) and those are basically 
undocumented at this point. My understanding is that the newer 
kind of macros can do literally anything, like order you a pizza 
at compile-time.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Manu via Digitalmars-d
On 12 May 2014 10:50, Walter Bright via Digitalmars-d
digitalmars-d@puremagic.com wrote:

 They are using Rust to write a safe and performant web browser while
 developing
 the language.


 Sure. But that browser hasn't been released yet. Consider that I've written
 safe and performant code in D, but others tell me I am unique and that I
 cannot expect average programmers to get it right.

 I repeatedly point out to Manu that he can write performant code in D that
 does not suffer from GC stalls, and he repeatedly replies that he has to
 work with average programmers who are not capable of doing this.

What? You've never offered me a practical solution.

You tell me I have to sacrifice any dependency on libraries
(ridiculous), and all the modern conveniences and safety of automatic
memory management to do it!
Indeed, average programmers are a real-life practical problem, and
it's not just them, I also appreciate the convenience offered
personally. I only have one life, I *really* appreciate saving time on
mundane and otherwise inconsequential tasks.
Tell me some other reasons why I would be attracted to D? Take that
all away, and what's the point? Automating some boilerplate is nice,
but it's not the motivating reason for a wholesale adoption.

You haven't told me how I can use the GC (or whatever memory
management scheme, I really don't care) in the low frequency code
(again, read: almost all code ever), and not have it interfere with
the high frequency code.
This is the fundamental problem with the GC. I can't use it
***ANYWHERE***, including any libraries I link. You can't isolate a
GC, its effects are not localised. If you could, maybe it'd be more
workable... but even if it were properly concurrent and didn't halt
the realtime threads when it collected, it's still totally impractical
for any background thread to freeze for 10s-100s of ms while it runs a
collect because I received a network packet which needs to be sent
somewhere for processing or whatever.

What do I do?


 So while I have no doubt that the Mozilla team may be very effective at
 using Rust and making it shine, that may not be transferable to the larger
 community.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Rainer Schuetze via Digitalmars-d



On 12.05.2014 06:57, Ola Fosheim Grøstad 
ola.fosheim.grostad+dl...@gmail.com wrote:

On Sunday, 11 May 2014 at 20:45:08 UTC, Rainer Schuetze wrote:



On 11.05.2014 22:33, Walter Bright wrote:



The Boehm collector cannot move objects around, the D one can.

Oh it can? Really?


Yes. D, for example, requires that objects not be self-referential for
this reason.


I don't think the GC would have problems with fixing up internal
pointers to the object itself. self-referential is prohibited to allow
moving structures by memcpy, e.g. as return value.


Does this mean that you cannot safely implement something as basic as a
circular linked list?


Depends on your implementation, but I would not expect that the nodes of 
your linked list could be subject to being moved (mostly happens on the 
stack AFAICT), as pointers to the node from other nodes would become 
invalid, too.
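
A tiny illustration of the memcpy/move problem mentioned above (just an
example, not code from the thread):

    struct Node
    {
        int value;
        Node* self;
        void setSelf() { self = &this; }
    }

    Node make()
    {
        Node n;
        n.setSelf();
        return n;     // may be moved by memcpy; in the caller's copy,
                      // n.self still points at the old stack location
    }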


Re: More radical ideas about gc and reference counting

2014-05-12 Thread ponce via Digitalmars-d

On Monday, 12 May 2014 at 00:44:54 UTC, Andrei Alexandrescu wrote:

On 5/11/14, 2:49 PM, ponce wrote:

On Sunday, 11 May 2014 at 21:43:06 UTC, sclytrack wrote:


There is very little use of @, it's mostly & and ~. Heck I
didn't find any @ while casually browsing the code. It's like they are
not using it at all.



Similarly in current C++ std::shared_ptr is barely there while
std::unique_ptr is all over the place.


The right tally here would include bald pointers as well. Also, 
I'm quite surprised by the confidence of this assertion. -- 
Andrei


Well, I am probably much biased indeed. I have no data.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Dicebot via Digitalmars-d
On Monday, 12 May 2014 at 07:12:29 UTC, Manu via Digitalmars-d 
wrote:

You haven't told me how I can use the GC (or whatever memory
management scheme, I really don't care) in the low frequency code
(again, read: almost all code ever), and not have it interfere with
the high frequency code.


You will like Don's talk this year ;)
Game dev code is not just any low level code though.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread ponce via Digitalmars-d

On Monday, 12 May 2014 at 00:44:54 UTC, Andrei Alexandrescu wrote:

On 5/11/14, 2:49 PM, ponce wrote:

On Sunday, 11 May 2014 at 21:43:06 UTC, sclytrack wrote:


There is very little use of @, it's mostly & and ~. Heck I
didn't find any @ while casually browsing the code. It's like they are
not using it at all.



Similarly in current C++ std::shared_ptr is barely there while
std::unique_ptr is all over the place.


The right tally here would include bald pointers as well. Also, 
I'm quite surprised by the confidence of this assertion. -- 
Andrei


Well I was talking with no data indeed.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Manu via Digitalmars-d
On 12 May 2014 17:24, Dicebot via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On Monday, 12 May 2014 at 07:12:29 UTC, Manu via Digitalmars-d wrote:

 You haven't told me how I can use the GC (or whatever memory
 management scheme, I really don't care) in the low frequency code
 (again, read: almost all code ever), and not have it interfere with
 the high frequency code.


 You will like Don's talk this year ;)

I'm super-disappointed I can't make it this year! We were evicted from
our house, have to move, and I can't bail for a week and leave that
all on my mrs while she kicks along the fulltime job :(

 Game dev code is not just any low level code though.

It's not just low-level code. It's also very high level code. Gamedev
is a unique (and extremely interesting) field which combines and
tightly integrates almost every imaginable field of software
engineering. I can't think of any that are left out, or any other
industries that also share this property.
I argue, if you can make a modern high-end game in a language, it can
do anything.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Tommi via Digitalmars-d

On Monday, 12 May 2014 at 00:50:24 UTC, Walter Bright wrote:

On 5/11/2014 1:59 PM, Timon Gehr wrote:
Borrowed pointers are not even superficially similar to near*. They are
compatible with everything else, because they can store data that was borrowed
from anywhere else.


As long as those pointers don't escape. Am I right in that one 
cannot store a borrowed pointer into a global data structure?


Perhaps:

struct Test {
    n: &'static int, // [1]
    m: int
}

static val: int = 123;
static mut t: Test = Test { n: &val, m: 0 };

fn main() {
    unsafe { // [2]
        let p = &mut t.m;
        *p = 456;
        println!("{} {}", *t.n, t.m); // prints: 123 456
    }
}

[1]: In order to create a static instance of 'Test', the 'n' 
field (which is a borrowed pointer) must be specified as to be 
pointing at a static immutable (int) variable.


[2]: Any use of static mutable data requires the use of an 
'unsafe' block (similar to @trusted in D)


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Tommi via Digitalmars-d

On Monday, 12 May 2014 at 08:10:43 UTC, Tommi wrote:

Perhaps: [..]


Somewhat surprisingly to me, you can later on change the borrowed 
pointer in the mutable static 'Test' to point at a mutable static 
int:


struct Test {
    n: &'static int
}

static old: int = 111;
static mut new: int = 222;
static mut t: Test = Test { n: &old };

fn main() {
    unsafe {
        println!("{}", *t.n); // prints: 111
        t.n = &new; // Can point to a different static
        println!("{}", *t.n); // prints: 222
        // ...but can't point to a local, e.g:
        // let v = 123;
        // t.n = &v; // error: `v` does not live long enough
        new = 333;
        println!("{}", *t.n); // prints: 333
    }
}


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Walter Bright via Digitalmars-d

On 5/12/2014 12:12 AM, Manu via Digitalmars-d wrote:

What? You've never offered me a practical solution.


I have, you've just rejected them.



What do I do?


1. you can simply do C++ style memory management. shared_ptr, etc.

2. you can have the non-pausible code running in a thread that is not registered 
with the gc, so the gc won't pause it. This requires that this thread not 
allocate gc memory, but it can use gc memory allocated by other threads, as long 
as those other threads retain a root to it.


3. D allows you to create and use any memory management scheme you want. You are 
simply not locked into GC. For example, I rewrote my Empire game into D and it 
did not do any allocation at all - no GC, not even malloc. I know that you'll 
need to do allocation, I'm just pointing out that GC allocations and pauses are 
hardly inevitable.


4. for my part, I have implemented @nogc so you can track down gc usage in code. 
I have also been working towards refactoring Phobos to eliminate unnecessary GC 
allocations and provide alternatives that do not allocate GC memory. 
Unfortunately, these PR's just sit there.


5. you can divide your app into multiple processes that communicate via 
interprocess communication. One of them pausing will not pause the others. You 
can even do things like turn off the GC collections in those processes, and when 
they run out of memory just kill them and restart them. (This is not an absurd 
idea, I've heard of people doing that effectively.)


6. If you call C++ libs, they won't be allocating memory with the D GC. D code 
can call C++ code. If you run those C++ libs in separate threads, they won't get 
paused, either (see (2)).


7. The Warp program I wrote avoids GC pauses by allocating ephemeral memory with 
malloc/free, and (ironically) only using GC for persistent data structures that 
should never be free'd. Then, I just turned off GC collections, because they'd 
never free anything anyway.


8. you can disable and enable collections, and you can cause collections to be 
run at times when nothing is happening (like when the user has not input 
anything for a while).



The point is, the fact that D has 'new' that allocates GC memory simply does not 
mean you are obliged to use it. The GC is not going to pause your program if you 
don't allocate with it. Nor will it ever run a collection at uncontrollable, 
random, asynchronous times.
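
Points 4 and 8 spelled out as a small sketch (the core.memory calls
are the real druntime API; the surrounding structure is only an
example):

    import core.memory : GC;

    @nogc void hotLoopStep()
    {
        // per-frame work; the compiler rejects any GC allocation in here
    }

    void onIdle()
    {
        GC.collect();       // run a collection while nothing is happening
        GC.minimize();      // optionally hand free pages back to the OS
    }

    void main()
    {
        GC.disable();       // no automatic collections from here on
        foreach (frame; 0 .. 1000)
            hotLoopStep();
        GC.enable();
        onIdle();
    }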


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Walter Bright via Digitalmars-d

On 5/11/2014 10:57 PM, Marco Leise wrote:

On Sun, 11 May 2014 17:50:25 -0700, Walter Bright newshou...@digitalmars.com wrote:


As long as those pointers don't escape. Am I right in that one cannot store a
borrowed pointer into a global data structure?


Right, and that's the point and entirely positive-to-do™.


This means that a global data structure in Rust has to decide what memory 
allocation scheme its contents must use, and cannot (without tagging) mix memory 
allocation schemes.


For example, let's say a compiler has internally a single hash table of strings. 
With a GC, those strings can be statically allocated, or on the GC heap, or 
anything with a lifetime longer than the table's. But I don't see how this could 
work in Rust.
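
For comparison, the D version of such a table is trivial precisely
because the GC doesn't care where the strings came from (illustrative
sketch):

    __gshared bool[string] internTable;   // single global table (locking omitted)

    void intern(string s)
    {
        internTable[s] = true;            // s may be a literal, GC-allocated, or a slice
    }

    void demo()
    {
        import std.conv : to;
        intern("a static string literal");    // lives in the read-only data segment
        intern(42.to!string);                 // allocated on the GC heap at run time
    }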




Re: More radical ideas about gc and reference counting

2014-05-12 Thread bearophile via Digitalmars-d

Walter Bright:


But I don't see how this could work in Rust.


Ask it to competent Rust developers/programmers.

Bye,
bearophile


Re: More radical ideas about gc and reference counting

2014-05-12 Thread John Colvin via Digitalmars-d

On Monday, 12 May 2014 at 08:45:56 UTC, Walter Bright wrote:

On 5/12/2014 12:12 AM, Manu via Digitalmars-d wrote:

What? You've never offered me a practical solution.


I have, you've just rejected them.



What do I do?


1. you can simply do C++ style memory management. shared_ptr, 
etc.


2. you can have the non-pausible code running in a thread that 
is not registered with the gc, so the gc won't pause it. This 
requires that this thread not allocate gc memory, but it can 
use gc memory allocated by other threads, as long as those 
other threads retain a root to it.


3. D allows you to create and use any memory management scheme 
you want. You are simply not locked into GC. For example, I 
rewrote my Empire game into D and it did not do any allocation 
at all - no GC, not even malloc. I know that you'll need to do 
allocation, I'm just pointing out that GC allocations and 
pauses are hardly inevitable.


4. for my part, I have implemented @nogc so you can track down 
gc usage in code. I have also been working towards refactoring 
Phobos to eliminate unnecessary GC allocations and provide 
alternatives that do not allocate GC memory. Unfortunately, 
these PR's just sit there.


5. you can divide your app into multiple processes that 
communicate via interprocess communication. One of them pausing 
will not pause the others. You can even do things like turn off 
the GC collections in those processes, and when they run out of 
memory just kill them and restart them. (This is not an absurd 
idea, I've heard of people doing that effectively.)


6. If you call C++ libs, they won't be allocating memory with 
the D GC. D code can call C++ code. If you run those C++ libs 
in separate threads, they won't get paused, either (see (2)).


7. The Warp program I wrote avoids GC pauses by allocating 
ephemeral memory with malloc/free, and (ironically) only using 
GC for persistent data structures that should never be free'd. 
Then, I just turned off GC collections, because they'd never 
free anything anyway.


8. you can disable and enable collections, and you can cause 
collections to be run at times when nothing is happening (like 
when the user has not input anything for a while).



The point is, the fact that D has 'new' that allocates GC 
memory simply does not mean you are obliged to use it. The GC 
is not going to pause your program if you don't allocate with 
it. Nor will it ever run a collection at uncontrollable, 
random, asynchronous times.


The only solutions to the libraries problem that I can see here 
require drastic separation of calls to said libraries from any 
even vaguely time critical code. This is quite restrictive.


Yes, calling badly optimised libraries from a hot loop is a bad 
idea anyway, but the GC changes this from


"well it might take a little more time than usual, but we can
spare a few nano-seconds and it'll show up easily in the profiler"

to

"it might, sometimes, cause the GC to run a full collection on
our 3.96 / 4.00 GB heap with an associated half-second pause."


And here we go again: "I can't use that library, its memory
management scheme is incompatible with my needs, I'll have to
rewrite it myself..."


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Dicebot via Digitalmars-d

On Monday, 12 May 2014 at 09:05:39 UTC, John Colvin wrote:

On Monday, 12 May 2014 at 08:45:56 UTC, Walter Bright wrote:
The only solutions to the libraries problem that I can see here 
require drastic separation of calls to said libraries from any 
even vaguely time critical code. This is quite restrictive.


I think this is more of a library-writing culture problem than an
engineering problem. A high quality library shouldn't rely on any
internal allocations at all, deferring this decision to user
code. Otherwise you will eventually have problems, GC or not.
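
One common way to defer that decision is an output-range based API,
roughly like this (sketch only):

    import std.format : formattedWrite;
    import std.range.primitives : isOutputRange;

    // The library formats into whatever sink the caller provides;
    // the caller decides whether and how that sink allocates.
    void formatRecord(Sink)(ref Sink sink, int id, string name)
        if (isOutputRange!(Sink, char))
    {
        sink.formattedWrite("%s: %s", id, name);
    }

    void demo()
    {
        import std.array : appender;
        auto buf = appender!string();         // this caller chose a GC-backed sink
        formatRecord(buf, 42, "answer");
    }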


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Paulo Pinto via Digitalmars-d

On Monday, 12 May 2014 at 09:05:39 UTC, John Colvin wrote:

On Monday, 12 May 2014 at 08:45:56 UTC, Walter Bright wrote:

On 5/12/2014 12:12 AM, Manu via Digitalmars-d wrote:

What? You've never offered me a practical solution.


I have, you've just rejected them.



What do I do?


1. you can simply do C++ style memory management. 
shared_ptr, etc.


2. you can have the non-pausible code running in a thread that 
is not registered with the gc, so the gc won't pause it. This 
requires that this thread not allocate gc memory, but it can 
use gc memory allocated by other threads, as long as those 
other threads retain a root to it.


3. D allows you to create and use any memory management scheme 
you want. You are simply not locked into GC. For example, I 
rewrote my Empire game into D and it did not do any allocation 
at all - no GC, not even malloc. I know that you'll need to do 
allocation, I'm just pointing out that GC allocations and 
pauses are hardly inevitable.


4. for my part, I have implemented @nogc so you can track down 
gc usage in code. I have also been working towards refactoring 
Phobos to eliminate unnecessary GC allocations and provide 
alternatives that do not allocate GC memory. Unfortunately, 
these PR's just sit there.


5. you can divide your app into multiple processes that 
communicate via interprocess communication. One of them 
pausing will not pause the others. You can even do things like 
turn off the GC collections in those processes, and when they 
run out of memory just kill them and restart them. (This is 
not an absurd idea, I've heard of people doing that 
effectively.)


6. If you call C++ libs, they won't be allocating memory with 
the D GC. D code can call C++ code. If you run those C++ libs 
in separate threads, they won't get paused, either (see (2)).


7. The Warp program I wrote avoids GC pauses by allocating 
ephemeral memory with malloc/free, and (ironically) only using 
GC for persistent data structures that should never be free'd. 
Then, I just turned off GC collections, because they'd never 
free anything anyway.


8. you can disable and enable collections, and you can cause 
collections to be run at times when nothing is happening (like 
when the user has not input anything for a while).



The point is, the fact that D has 'new' that allocates GC 
memory simply does not mean you are obliged to use it. The GC 
is not going to pause your program if you don't allocate with 
it. Nor will it ever run a collection at uncontrollable, 
random, asynchronous times.


The only solutions to the libraries problem that I can see here 
require drastic separation of calls to said libraries from any 
even vaguely time critical code. This is quite restrictive.


Yes, calling badly optimised libraries from a hot loop is a bad 
idea anyway, but the GC changes this from


"well it might take a little more time than usual, but we can
spare a few nano-seconds and it'll show up easily in the
profiler"

to

"it might, sometimes, cause the GC to run a full collection on
our 3.96 / 4.00 GB heap with an associated half-second pause."


And here we go again: "I can't use that library, its memory
management scheme is incompatible with my needs, I'll have to
rewrite it myself..."


A badly placed malloc() in library code can also trigger OS
virtualization mechanisms and cause processes to be swapped out to
disk, with the respective overhead in disk access and time spent
in kernel code.

So it is just not a "we can spare a few nano-seconds" situation.

--
Paulo


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Marco Leise via Digitalmars-d
On Sun, 11 May 2014 22:11:28 -0700, Walter Bright newshou...@digitalmars.com wrote:

  But I thought ARC cannot be designed without GC to resolve
  cycles.
 
 It can be, there are various schemes to deal with that, including don't 
 create 
 cycles. GC is just one of them.
 
 http://en.wikipedia.org/wiki/Reference_counting#Dealing_with_reference_cycles

Yes that article mentions:
a) avoid creating them
b) explicitly forbid reference cycles
c) Judicious use of weak references
d) manually track that data structure's lifetime
e) tracing garbage collector
f) adding to a root list all objects whose reference
   count is decremented to a non-zero value and periodically
   searching all objects reachable from those roots.

To pick up your statement again: »Your proposal still relies
on a GC to provide the memory safety, […] it is a hybrid
ARC/GC system.«

a) and b) let's assume never creating cycles is not a feasible
  option in a systems programming language
c) and d) don't provide said memory safety
e) and f) ARE tracing garbage collectors

ergo: »But I thought ARC cannot be designed without GC to
resolve cycles.«

You were arguing against Michel Fortin's proposal on the 
surface, when your requirement cannot even be fulfilled
theoretically it seems. Which could mean that you don't like
the idea of replacing D's GC with an ARC solution.

»This is the best horse I could find for the price. It is
 pretty fast and ...«
»No, it still has four legs.«

-- 
Marco



Re: More radical ideas about gc and reference counting

2014-05-12 Thread Marco Leise via Digitalmars-d
On Mon, 12 May 2014 01:54:58 -0700, Walter Bright newshou...@digitalmars.com wrote:

 On 5/11/2014 10:57 PM, Marco Leise wrote:
  On Sun, 11 May 2014 17:50:25 -0700, Walter Bright newshou...@digitalmars.com wrote:
 
  As long as those pointers don't escape. Am I right in that one cannot 
  store a
  borrowed pointer into a global data structure?
 
  Right, and that's the point and entirely positive-to-do™.
 
 This means that a global data structure in Rust has to decide what memory 
 allocation scheme its contents must use, and cannot (without tagging) mix 
 memory 
 allocation schemes.
 
 For example, let's say a compiler has internally a single hash table of 
 strings. 
 With a GC, those strings can be statically allocated, or on the GC heap, or 
 anything with a lifetime longer than the table's. But I don't see how this 
 could 
 work in Rust.

:( Good question. I have no idea.

-- 
Marco



Re: More radical ideas about gc and reference counting

2014-05-12 Thread Tommi via Digitalmars-d

On Sunday, 11 May 2014 at 21:43:06 UTC, sclytrack wrote:

I like this owner/unique, borrow thing.

@ is managed (currently reference counted)
~ is owner
& is borrow


I like it too. But a few notes:

1) The managed pointer @T has been deprecated and you should use 
the standard library types Gc<T> and Rc<T> instead.


2) The owned pointer ~T has been largely removed from the 
language and you should use the standard library type Box<T> 
instead.


The basic idea is that if a function needs to have ownership of 
its argument, the function should take its argument by value. And 
if the function doesn't need the ownership, it should take its 
argument either by a mutable or immutable reference (they don't 
like to call it borrowed pointer anymore, it's called simply a 
reference now). Owned types get moved by default when you pass 
them to a function that takes its argument by value. You call the 
'clone' method to make a copy of a variable of an owned type.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Marco Leise via Digitalmars-d
On Mon, 12 May 2014 09:32:58 +0000,
Paulo Pinto pj...@progtools.org wrote:

 On Monday, 12 May 2014 at 09:05:39 UTC, John Colvin wrote:
  On Monday, 12 May 2014 at 08:45:56 UTC, Walter Bright wrote:
  On 5/12/2014 12:12 AM, Manu via Digitalmars-d wrote:
  What? You've never offered me a practical solution.
 
  I have, you've just rejected them.
 
 
  What do I do?
 
  1. you can simply do C++ style memory management. 
  shared_ptr, etc.
 
  2. you can have the non-pausible code running in a thread that 
  is not registered with the gc, so the gc won't pause it. This 
  requires that this thread not allocate gc memory, but it can 
  use gc memory allocated by other threads, as long as those 
  other threads retain a root to it.
 
  3. D allows you to create and use any memory management scheme 
  you want. You are simply not locked into GC. For example, I 
  rewrote my Empire game into D and it did not do any allocation 
  at all - no GC, not even malloc. I know that you'll need to do 
  allocation, I'm just pointing out that GC allocations and 
  pauses are hardly inevitable.
 
  4. for my part, I have implemented @nogc so you can track down 
  gc usage in code. I have also been working towards refactoring 
  Phobos to eliminate unnecessary GC allocations and provide 
  alternatives that do not allocate GC memory. Unfortunately, 
  these PR's just sit there.
 
  5. you can divide your app into multiple processes that 
  communicate via interprocess communication. One of them 
  pausing will not pause the others. You can even do things like 
  turn off the GC collections in those processes, and when they 
  run out of memory just kill them and restart them. (This is 
  not an absurd idea, I've heard of people doing that 
  effectively.)
 
  6. If you call C++ libs, they won't be allocating memory with 
  the D GC. D code can call C++ code. If you run those C++ libs 
  in separate threads, they won't get paused, either (see (2)).
 
  7. The Warp program I wrote avoids GC pauses by allocating 
  ephemeral memory with malloc/free, and (ironically) only using 
  GC for persistent data structures that should never be free'd. 
  Then, I just turned off GC collections, because they'd never 
  free anything anyway.
 
  8. you can disable and enable collections, and you can cause 
  collections to be run at times when nothing is happening (like 
  when the user has not input anything for a while).
 
 
  The point is, the fact that D has 'new' that allocates GC 
  memory simply does not mean you are obliged to use it. The GC 
  is not going to pause your program if you don't allocate with 
  it. Nor will it ever run a collection at uncontrollable, 
  random, asynchronous times.
 
  The only solutions to the libraries problem that I can see here 
  require drastic separation of calls to said libraries from any 
  even vaguely time critical code. This is quite restrictive.
 
  Yes, calling badly optimised libraries from a hot loop is a bad 
  idea anyway, but the GC changes this from
 
  "well it might take a little more time than usual, but we can 
  spare a few nano-seconds and it'll show up easily in the 
  profiler"
 
  to
 
  "it might, sometimes, cause the GC to run a full collection on 
  our 3.96 / 4.00 GB heap with an associated half-second pause."
 
  And here we go again, I can't use that library, its memory 
  management scheme is incompatible with my needs, I'll have to 
  rewrite it myself...
 
 A badly placed malloc() in library code can also trigger OS
 virtual memory mechanisms and cause processes to be swapped out to
 disk, with the corresponding overhead in disk access and time spent
 in kernel code.
 
 So it is just not a "we can spare a few nano-seconds" situation.
 
 --
 Paulo

Yes, it could easily extend to a longer wait. I think we all
know programs that hang while the system is swapping out.
Don't let it get to that! A PC game would typically reduce
caches or texture resolutions before running out of RAM.

Linux has a threshold of free pages it tries to keep available
at any time to satisfy occasional small allocations.
http://www.science.unitn.it/~fiorella/guidelinux/tlk/node39.html

All-in-all malloc is less likely to cause long pauses. It just
allocates and doesn't ask itself if there might be dead memory
to salvage to satisfy a request.

Time will tell if all well written D libraries will be @nogc
to move the question of allocations to the user.
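
As a small illustration of what @nogc catches (sum and dup2 are made-up example functions):

@nogc int sum(const(int)[] a)
{
    int s;
    foreach (x; a)
        s += x;     // fine: no GC allocation anywhere in here
    return s;
}

// @nogc int[] dup2(const(int)[] a) { return a.dup; }
// The line above would be rejected at compile time: .dup allocates from the
// GC heap, which is exactly the kind of hidden allocation being tracked down.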

-- 
Marco



Re: More radical ideas about gc and reference counting

2014-05-12 Thread John Colvin via Digitalmars-d

On Monday, 12 May 2014 at 10:51:33 UTC, Marco Leise wrote:


Time will tell if all well written D libraries will be @nogc
to move the question of allocations to the user.


If there were such a thing as weakly nogc then we could 
do this:


//some library function
void foo(OR, IR)(OR o, IR i) @weak-nogc
{
//take things from i and put them in o
}

allocations would be possible during the execution of foo, but 
*only* in the implementations of OR and IR, which means that the 
developer gets the control and guarantees they need, but doesn't 
have to explicitly pre-allocate (which might not even be 
possible).
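
A hypothetical usage sketch of that idea (the attribute does not exist, and the two sink types below are made up; only the placement of allocations is the point):

struct FixedSink
{
    char[256] buf;
    size_t len;
    void put(char c) @nogc { buf[len++] = c; }   // never allocates
}

struct GrowingSink
{
    char[] data;
    void put(char c) { data ~= c; }              // allocates, but only inside the sink
}

// Under the proposed weaker attribute, foo(FixedSink.init, input) would be
// guaranteed allocation-free end to end, while foo(GrowingSink.init, input)
// could allocate only inside GrowingSink.put, never in foo's own body.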


I don't see how it would work with UFCS though...


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Timon Gehr via Digitalmars-d

On 05/12/2014 02:50 AM, Walter Bright wrote:

On 5/11/2014 1:59 PM, Timon Gehr wrote:

On 05/11/2014 10:05 PM, Walter Bright wrote:

That's clearly an additional benefit of the borrowed pointer notion. But
have you examined generated Rust code for the cost of inc/dec? I
haven't, but I don't see any way they could avoid this (very expensive)
cost without borrowed pointers.

Sure, but performance is the additional benefit.


One constant theme in this thread, one I find baffling, is the regular
dismissal of the performance implications of inc/dec.


Irrelevant, I'm not doing that.

(And again, reference counting is not the only allocation mechanism in 
Rust. AFAICT, most allocations use owned pointers.)



Borrowed pointers are not necessary to support raw pointers  - this can be (and 
is in some
systems) supported by simply wrapping the raw pointer with a dummy
reference count.
...


I have no idea what this part is trying to bring across.


The reason for borrowed pointers is performance.


No, it is safety. Raw pointers give you all of the performance.


Rust would be non-viable without them.



True in that it would fail to meet its design goals. Rust provides a 
tracing garbage collector as well, so it is at least as viable as D 
regarding performance of safe memory management. (Probably more, 
actually, because it does not provide precision-unfriendly constructs 
such as undiscriminated unions.)



I strongly suggest writing a snippet in [[insert your favorite proven
technology RC language here]] and disassembling the result, and have a
look at what inc/dec entails.
...


I don't have trouble seeing the cost of reference counting. (And it's 
you who claimed that this is going to be the only memory allocation 
scheme in use in Rust code.)





The thing is, if the compiler is capable of figuring out these lifetimes
by examining the code,


There are explicit lifetime annotations in function signatures.


Yes, because the compiler cannot figure it out itself, so the programmer
has to annotate.
...


You are saying 'if the compiler is capable of figuring out these 
lifetimes by examining the code, then ...' and then you are saying that 
the compiler is incapable of figuring them out itself. What is it that 
we are arguing here? Are you saying the Rust compiler should infer all 
memory management automatically or that it cannot possibly do that, or 
something else?





It is simply not true that type systems are inherently restricted to
checking
trivial properties. They can be made as strong as mathematical logic
without
much fuss.


Again, Rust would not need borrowed pointers nor the annotations for
them if this knowledge could be deduced by the compiler. Heck, if the
compiler can deduce lifetimes accurately,


It does not deduce anything. It checks that borrowed pointers do not 
outlive their source. Lifetime parameters are used to transport the 
required information across function signatures.



you can get rid of GC and RC,
and just have the compiler insert malloc/free in the right spots.
...


That's a form of GC, and I already acknowledged that global region 
inference exists, but noted that this is not what is used.



Note that there is a Java version that does this partway, sometimes it
will replace a GC object with a stack allocated one if it is successful
in deducing that the object lifetime does not exceed the lifetime of the
function.
...


I know. Also, inference is harder to control and less efficient than 
simply making the relevant information part of type signatures.


http://en.wikipedia.org/wiki/Region-based_memory_management#Region_inference

This work was completed in 1995[9] and integrated into the ML Kit, a 
version of ML based on region allocation in place of garbage collection. 
This permitted a direct comparison between the two on medium-sized test 
programs, yielding widely varying results (between 10 times faster and 
four times slower) depending on how region-friendly the program was; 
compile times, however, were on the order of minutes.





Yes, one is & and the other is @.


No, actually currently one is & and the other is RC<T> AFAIK.


Then Rust changed again. The document I read on borrowed pointers was
likely out of date, though it had no date on it.
...


Yes, most documents on Rust are at least slightly out of date.




RC<T> is not more general. It cannot refer to stack-allocated data,
for instance.


So there is no general pointer type that has an unbounded lifetime?
...


How can it be general and have an unbounded lifetime and be safe?




Sure, borrowing is very lightweight, but ultimately what is most
important is
that it solves the problem of multiple incompatible pointer types and
makes the
type system more expressive as well.


Adding more pointer types makes a type system more expressive, by
definition.
...


No. Also, this is irrelevant, because I was highlighting the 
_importance_ of the fact that it does in this case.





A function that uses none of the 

Re: More radical ideas about gc and reference counting

2014-05-12 Thread bearophile via Digitalmars-d

Timon Gehr:

(Probably more, actually, because it does not provide 
precision-unfriendly constructs such as undiscriminated unions.)


I suggested adding an optional method named onGC to unions that, 
if present, is called at run time by the GC to learn the real 
type of the stored data, to make tracing more precise.


Bye,
bearophile


Re: More radical ideas about gc and reference counting

2014-05-12 Thread via Digitalmars-d

On Sunday, 11 May 2014 at 18:18:41 UTC, Rainer Schuetze wrote:

For a reasonable GC I currently see 2 possible directions:

1. Use a scheme that takes a snapshot of the heap, stack and 
registers at the moment of collection and does the actual 
collection in another thread/process while the application 
continues to run. This is the way Leandro Lucarella's concurrent 
GC works (http://dconf.org/2013/talks/lucarella.html), but it 
relies on fork, which doesn't exist on every OS/architecture. A 
manual copy of the memory won't scale to very large heaps, 
though it might be compressed down to just the possible pointers. 
In the worst case it will need twice as much memory as the current heap.


It would be very interesting how far we can push this model on 
the supported platforms.


2. Change the compiler to emit (library-defined) write barriers 
for modifications of (possible) pointers. This would allow 
experimenting with more sophisticated GC algorithms (if the write 
barrier is written in D, we might also need pointers without 
barriers to implement it). I know Walter is against this, and I 
am also not sure whether it adds acceptable overhead, but we don't 
have proof of the opposite either.


As we all know, the usual eager reference counting with atomic 
operations is not memory-safe, so my current favorite is 
concurrent buffered reference counting (see chapter 18.2/3 of 
"The Garbage Collection Handbook" by Richard Jones et al): 
reference count modifications are not performed directly by the 
write barrier, but it just logs the operation into a thread 
local buffer. This is then processed by a collector thread 
which also detects cycles (only on candidates which had their 
reference count decreased during the last cycle). Except for 
very large reference chains this scales with the number of 
executed pointer modifications and not with the heap size.
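
A very rough sketch of the buffered write-barrier idea (every name here is invented; a real version would need a non-GC buffer handed to the collector thread, and the compiler would have to rewrite pointer stores into such a call):

struct RefChange
{
    void* slot;     // address of the pointer field that was written
    void* oldVal;   // previous target: its count is decremented later
    void* newVal;   // new target: its count is incremented later
}

RefChange[] tlsLog;   // module-level variables are thread-local in D; in
                      // practice this buffer must not itself be GC/barrier managed

void writeBarrier(T)(ref T* slot, T* newValue)
{
    tlsLog ~= RefChange(cast(void*) &slot, cast(void*) slot, cast(void*) newValue);
    slot = newValue;  // the actual store; the collector thread applies the
                      // logged count changes asynchronously and checks
                      // decremented candidates for cycles
}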


I'm surprised that you didn't include:

3. Thread-local GC, isolated zones (restricting where references 
to objects of a particular heap can be placed), exempting certain 
threads from GC completely, ...


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Timon Gehr via Digitalmars-d

On 05/12/2014 10:54 AM, Walter Bright wrote:

On 5/11/2014 10:57 PM, Marco Leise wrote:

On Sun, 11 May 2014 17:50:25 -0700,
Walter Bright newshou...@digitalmars.com wrote:


As long as those pointers don't escape. Am I right in that one cannot
store a
borrowed pointer into a global data structure?


Right, and that's the point and entirely positive-to-do™.


This means that a global data structure in Rust has to decide what
memory allocation scheme its contents must use,


Global variables are banned in Rust code outside of unsafe blocks.


and cannot (without tagging) mix memory allocation schemes.
...


Tagging won't help with all memory allocation schemes.


For example, let's say a compiler has internally a single hash table of
strings. With a GC, those strings can be statically allocated, or on the
GC heap, or anything with a lifetime longer than the table's.
But I don't see how this could work in Rust.



It's possible if you don't make the table global.
(OTOH in D this is not going to work at all.)


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Steven Schveighoffer via Digitalmars-d
On Mon, 12 May 2014 03:39:12 -0400, Manu via Digitalmars-d  
digitalmars-d@puremagic.com wrote:



On 12 May 2014 17:24, Dicebot via Digitalmars-d



You will like Don's talk this year ;)


I'm super-disappointed I can't make it this year!


?!! http://dconf.org/2014/talks/evans.html


We were evicted from
our house, have to move, and I can't bail for a week and leave that
all on my mrs while she kicks along the fulltime job :(


Oh that sucks...

-Steve


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Steven Schveighoffer via Digitalmars-d
On Sun, 11 May 2014 16:33:04 -0400, Walter Bright  
newshou...@digitalmars.com wrote:



On 5/11/2014 2:48 AM, Benjamin Thaut wrote:
Mostly precise doesn't help. It's either fully precise or being stuck  
with an imprecise mark & sweep.


This is not correct. It helps because most of the false pointers will be  
in the heap, and the heap will be accurately scanned, nearly eliminating  
false references to garbage.


It doesn't matter where the false pointers are. The largest issue with  
false pointers is not how many false pointers there are. It only matters  
how large the block is that it points at. The larger your blocks get,  
the more likely they are pointed at by the stack. On 32-bit systems,  
allocate 1/256th of your memory space (i.e. 16.5MB), and the likelihood of  
random data on the stack pointing at it is roughly 1/256. This problem is  
just about eliminated with 64-bit pointers.
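
A back-of-the-envelope check of that figure (a tiny D snippet just to spell out the arithmetic; 16 MB is used as a round stand-in for the 16.5 MB above):

enum size_t blockSize = 16 * 1024 * 1024;           // ~16 MB block
enum double four_GB   = 4294967296.0;               // 2^32, the 32-bit address space
enum double full64    = 18446744073709551616.0;     // 2^64
enum double pHit32    = blockSize / four_GB;        // = 0.00390625 = 1/256
enum double pHit64    = blockSize / full64;         // ≈ 9.1e-13 ≈ 2^-40: effectively never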


Yes. D, for example, requires that objects not be self-referential for  
this reason.


As previously stated, self referencing does not preclude GC moving. This  
statement is simply false, you can self reference in D for objects. You  
cannot for structs, but not because of a possibility for the moving GC,  
but because of the requirement to be able to move a struct instance.
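
A minimal example of why this matters for structs in particular (the struct is deliberately broken to show the failure; setup is a made-up helper):

struct SelfRef
{
    int  value;
    int* p;                     // meant to point at this instance's own 'value'

    void setup() { p = &value; }
}

void demo()
{
    SelfRef a;
    a.setup();

    SelfRef b = a;              // plain bit-copy, which D assumes is always valid
    assert(b.p is &a.value);    // b's "self" pointer still points into a...
    assert(b.p !is &b.value);   // ...so b is silently wrong after the copy/move
}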


And in fact, even if it's forbidden, requires is too strong a word --  
there is no static or runtime prevention of this.


-Steve


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Manu via Digitalmars-d
On 12 May 2014 18:45, Walter Bright via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On 5/12/2014 12:12 AM, Manu via Digitalmars-d wrote:

 What? You've never offered me a practical solution.


 I have, you've just rejected them.


 What do I do?


 1. you can simply do C++ style memory management. shared_ptr, etc.

I already have C++. I don't want another one.

 2. you can have the non-pausible code running in a thread that is not
 registered with the gc, so the gc won't pause it. This requires that this
 thread not allocate gc memory, but it can use gc memory allocated by other
 threads, as long as those other threads retain a root to it.

It still sounds the same as manual memory management in practice though:
like you say, the other thread must maintain a root to it,
which means I need to manually retain it somehow, and when the worker
thread finishes with it, it needs to send a signal or something back
to say it's done so it can be released... it sounds more inconvenient
than direct manual memory management in practice.
Sounds slow too. Dec-ing a ref is certainly faster than inter-thread
communication.

This also makes library calls into effective RPC's if I can't call
into them from the active threads.

How long is a collect liable to take in the event the GC threads need
to collect? Am I likely to lose my service threads for 100s of
milliseconds at a time?

I'll think on it, but I don't think there's anything practically
applicable here, and it really sounds like it creates a lot more
trouble and complexity than it addresses.

 3. D allows you to create and use any memory management scheme you want. You
 are simply not locked into GC. For example, I rewrote my Empire game into D
 and it did not do any allocation at all - no GC, not even malloc. I know
 that you'll need to do allocation, I'm just pointing out that GC allocations
 and pauses are hardly inevitable.

C++ lets me create any memory management scheme I like by the same argument.
I lose all the parts of the language that implicitly depend on the GC,
and 3rd party libs (that don't care about me and my project).
Why isn't it a reasonable argument to say that not having access to
libraries is completely unrealistic? You can't write modern software
without extensive access to libraries. Period.

I've said before, I don't want to be a second class citizen with
access to only a subset of the language.

 4. for my part, I have implemented @nogc so you can track down gc usage in
 code. I have also been working towards refactoring Phobos to eliminate
 unnecessary GC allocations and provide alternatives that do not allocate GC
 memory. Unfortunately, these PR's just sit there.

The effort is appreciated, but it was never a solution. I said @nogc
was the exact wrong approach to my situation right from the start, and
I predicted that would be used as an argument the moment it appeared.
Tracking down GC usage isn't helpful when it leads you to a lib call
that you can't change. And again, eliminating useful and productive
parts of the language is not a goal we should be shooting for.

I'll find it useful in the high-performance realtime bits; ie, the
bits that I typically disassemble and scrutinise after every compile.
But that's not what we're discussing here.
I'm happy with D for my realtime code, I have the low-level tools I
need to make the real-time code run fast. @nogc is a little bonus that
will allow to guarantee no sneaky allocations are finding their way
into the fast code, and that might save a little time, but I never
really saw that as a significant problem in the first place.

What we're talking about is productivity, convenience and safety in
the non-realtime code. The vast majority of code, that programmers
spend most of their days working on.


Consider it this way... why do you have all these features in D that
cause implicit allocation if you don't feel they're useful and
important parts of the language?
Assuming you do feel they're important parts of the language, why do
you feel it's okay to tell me I don't deserve access to them?
Surely I'm *exactly* the target market for D...? High-pressure,
intensive production environments, still depending exclusively on
native code, with code teams often in the realm of 50-100, containing
many juniors, aggressive schedules which can't afford to waste
engineering hours... this is a code environment that's prone to MANY
bugs, and countless wasted hours as a consequence.
Convenience and safety are important to me... I don't know what you
think I'm interested in D for if you think I should be happy to
abandon a whole chunk of the language, just because I have a couple of
realtime threads :/


 5. you can divide your app into multiple processes that communicate via
 interprocess communication. One of them pausing will not pause the others.
 You can even do things like turn off the GC collections in those processes,
 and when they run out of memory just kill them and restart them. (This is
 not 

Re: More radical ideas about gc and reference counting

2014-05-12 Thread bearophile via Digitalmars-d

Manu:


we are an industry in desperate need of salvation,
it's LONG overdue, and I want something that actually works 
well for us, not a crappy set of compromises because the

language has a fundamental incompatibility with my industry :/


Perhaps the game industry has to start creating a language 
designed for its needs, like the scientific people have done 
(Julia), the browser ones (Rust), the Web ones, etc. 
With a lot of work, in less than ten years you can have a usable 
language.


Bye,
bearophile


Re: More radical ideas about gc and reference counting

2014-05-12 Thread via Digitalmars-d

On Monday, 12 May 2014 at 08:45:56 UTC, Walter Bright wrote:
2. you can have the non-pausible code running in a thread that 
is not registered with the gc, so the gc won't pause it. This 
requires that this thread not allocate gc memory, but it can 
use gc memory allocated by other threads, as long as those 
other threads retain a root to it.


This and @nogc is a very promising trend, but you should still be 
able to partion the GC search space.


The key to controlled real time performance is to partition the 
search space, that goes for anything algorithmic; memory 
management inclusive. That applies to any scheme like owned, ARC, 
GC etc. It makes little sense having to trace everything if only 
the physics engine is the one churning memory like crazy.


And fork() is not a solution without extensive structuring of 
allocations. Stuff like physics touches all the pages the physics 
objects sit on 100+ times per second, so you need to group 
allocations onto pages based on usage patterns. (No point in 
forking if you get write traps on 50,000 pages the next time the 
physics engine runs :-).


3. D allows you to create and use any memory management scheme 
you want. You are simply not locked into GC. For example, I 
rewrote my Empire game into D and it did not do any allocation 
at all - no GC, not even malloc. I know that you'll need to do 
allocation, I'm just pointing out that GC allocations and 
pauses are hardly inevitable.


This is no doubt the best approach for a MMO client. You have a 
window on the world and cache as much as possible both to memory 
and disk. Basically get as much memory from the OS as you can 
hoard (with headroom set by heuristics) when your application has 
focus and release caches that are outside the window when you 
lose focus to another application. This means you need a 
dedicated runtime for games that can delay GC collection and eat 
into the caches when you are low on computational resources.


You also need to distinguish between memory that is locked to RAM 
and memory that can swap. You should always lock memory for real 
time threads. So if you want to GC this, you need a GC that 
support multiple heaps.


(Some hardware might also distinguish between RAM that is 
accessible by the GPU or that has different characteristics in 
areas such as persistence or performance.)


5. you can divide your app into multiple processes that 
communicate via interprocess communication. One of them pausing 
will not pause the others. You can even do things like turn off


Why processes and not threads with their own local GC?

6. If you call C++ libs, they won't be allocating memory with 
the D GC. D code can call C++ code. If you run those C++ libs


But what happens if that C++ code does new 
HeapStuff(D_allocated_memory) and then calls back to D? You 
cannot presume that C++ coders have the discipline to always 
allocate local memory from the stack, so basically you cannot GC 
collect while there are C++ functions on the stack. In order to 
get there the GC collector needs to understand the malloc heap 
and trace that one too.
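
For reference, the discipline being doubted here amounts, on the D side, to explicit pinning with druntime's existing GC.addRoot/GC.removeRoot; a minimal sketch (the two extern(C++) functions are hypothetical):

import core.memory : GC;

extern (C++) void cppTake(void* p);   // hypothetical: C++ side stores p somewhere
extern (C++) void cppDone(void* p);   // hypothetical: C++ side is finished with p

void handOver()
{
    auto buf = new ubyte[](4096);
    GC.addRoot(buf.ptr);        // the collector treats buf as reachable even if
                                // no D reference remains
    cppTake(buf.ptr);

    // ... later, once the C++ side signals completion:
    cppDone(buf.ptr);
    GC.removeRoot(buf.ptr);     // unpin; normal collection rules apply again
}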


Auditing all C++ libraries I want to use is too much work, and 
tracing the malloc heap is too time consuming, so at the end of 
the day you'll get a more robust environment by only scanning 
(tracing) the stacks when there is only D function calls on the 
stack, with a precise collector.


That means you need to partition the search space otherwise the 
collector might not run in time.


Freezing the world is really ugly. Most applications are actually 
soft real time.


Games are part hard real time, part soft real time. The 
difference between games and other applications is that there is 
less headroom so you have to do more work to make the glitches 
and stuttering occur sufficiently seldom to be perceived as 
acceptable by the end user. But games are not special.





Re: More radical ideas about gc and reference counting

2014-05-12 Thread Walter Bright via Digitalmars-d

On 5/12/2014 3:18 AM, Marco Leise wrote:

Your were arguing against Michel Fortin's proposal on the
surface, when your requirement cannot even be fulfilled
theoretically it seems.


Lots of people use ARC without a GC.



Which could mean that you don't like
the idea of replacing D's GC with an ARC solution.


I don't like the idea of replacing D's GC with ARC. But for different reasons.



Re: More radical ideas about gc and reference counting

2014-05-12 Thread Walter Bright via Digitalmars-d

On 5/12/2014 5:15 AM, Timon Gehr wrote:

On 05/12/2014 10:54 AM, Walter Bright wrote:

On 5/11/2014 10:57 PM, Marco Leise wrote:

On Sun, 11 May 2014 17:50:25 -0700,
Walter Bright newshou...@digitalmars.com wrote:


As long as those pointers don't escape. Am I right in that one cannot
store a
borrowed pointer into a global data structure?


Right, and that's the point and entirely positive-to-do™.


This means that a global data structure in Rust has to decide what
memory allocation scheme its contents must use,


Global variables are banned in Rust code outside of unsafe blocks.


Global can also mean assigning through a reference passed as a parameter.



Re: More radical ideas about gc and reference counting

2014-05-12 Thread Manu via Digitalmars-d
On 13 May 2014 02:16, bearophile via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 Manu:


 we are an industry in desperate need of salvation,
 it's LONG overdue, and I want something that actually works well for us,
 not a crappy set of compromises because the
 language has a fundamental incompatibility with my industry :/


 Perhaps the game industry has to start the creation of a language designed
 for its needs, like the scientific people have done (Julia), the browser
 ones (Rust), the Web ones have done, etc. With lot of work in less than ten
 years you can have an usable language.

But D is *so close*... and I like it! _

I have to say that this discussion has certainly left me somewhat
intrigued by Rust though.
I've never given it a fair go because I find the syntax so distasteful
and deterring.
I wonder if there's a market for a rust fork that re-skin's the language ;)


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Walter Bright via Digitalmars-d

On 5/12/2014 2:12 AM, Dicebot wrote:

I think this is more of library writing culture problem than engineering
problem. High quality library shouldn't rely on any internal allocations at all,
deferring this decision to user code. Otherwise you will eventually have
problems, GC or not.


Consider my PR:

 https://github.com/D-Programming-Language/phobos/pull/2149

This is exactly what it does - it 'pushes' the decisions about allocating memory 
up out of the library to the user. I suspect a great deal of storage allocation 
can be removed from Phobos with this technique, without sacrificing performance, 
flexibility, or memory safety. (In fact, it improves on performance and 
flexibility!)
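
As an illustration of the general pattern (not of the specific PR), a sketch of moving the allocation decision out of a library function and into the caller:

import std.range : put;

// library decides: always returns a freshly GC-allocated string
string joinedGC(const(string)[] words)
{
    string result;
    foreach (i, w; words)
    {
        if (i) result ~= ' ';
        result ~= w;
    }
    return result;
}

// caller decides: the function only writes into whatever sink it is given
void joined(Sink)(ref Sink sink, const(string)[] words)
{
    foreach (i, w; words)
    {
        if (i) put(sink, ' ');
        put(sink, w);
    }
}

The caller can pass an Appender to get the old behaviour, or a fixed, preallocated buffer when no GC allocation is acceptable; the library itself no longer allocates anything.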



I also agree with your larger point that if you are relying on an unknown 
library for time critical code, and that library was not designed with time 
criticality guarantees in mind, you're going to have nothing but trouble. 
Regardless of GC or RC.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread via Digitalmars-d

On Monday, 12 May 2014 at 16:16:06 UTC, bearophile wrote:
Perhaps the game industry has to start the creation of a 
language designed for its needs, like the scientific people 
have done (Julia), the browser ones (Rust), the Web ones have 
done, etc. With lot of work in less than ten years you can have 
an usable language.


I don't think games are unique or special. Most games are even in 
the easy space by having mostly static data. Meaning the amount 
of unexpected dynamic data is pretty low.  Games also have the 
luxury of redefining the requirements spec to match available 
technology. The games industry does however have its own culture 
and paradigms and fashions… With subcultures.


However, most interactive applications will suffer from the same 
issues if you increase the load so that they run out of headroom. 
Even unix commands like find and grep have latency requirements 
if the interaction is to be pleasant. By good fortune find and 
grep haven't changed their interface for 40+ years, so they 
were designed for low performance CPUs. That does not mean that 
you cannot design a better find-like application today that 
will run into runtime related usability issues if you freeze the 
world.


At the end of the day, a system level language should support key 
strategies used for writing performant system level code in a 
reliable manner. It should also not lock you to a specific 
runtime that you couldn't easily write yourself. It should also 
not lock you to a specific model of how to structure your code 
(like monitors). I am not even sure it should provide OS 
abstractions, because that is not really system level 
programming. That is unixy (Posix) programming. A system level 
programming language should be free of OS and modelling related 
legacy.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Dicebot via Digitalmars-d

On Monday, 12 May 2014 at 17:03:18 UTC, Walter Bright wrote:

On 5/12/2014 2:12 AM, Dicebot wrote:
I think this is more of library writing culture problem than 
engineering
problem. High quality library shouldn't rely on any internal 
allocations at all,
deferring this decision to user code. Otherwise you will 
eventually have

problems, GC or not.


Consider my PR:

 https://github.com/D-Programming-Language/phobos/pull/2149

This is exactly what it does - it 'pushes' the decisions about 
allocating memory up out of the library to the user. I suspect 
a great deal of storage allocation can be removed from Phobos 
with this technique, without sacrificing performance, 
flexibility, or memory safety. (In fact, it improves on 
performance and flexibility!)


We have already had a discussion where I stated that the current 
@nogc implementation is not robust enough, and I failed to explain 
the use case for a weaker @nogc clearly. The conclusion was that we 
should return to this topic after Don's DConf talk ;)


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Dicebot via Digitalmars-d
On Monday, 12 May 2014 at 17:03:41 UTC, Manu via Digitalmars-d 
wrote:

But D is *so close*... and I like it! _

I have to say that this discussion has certainly left me 
somewhat

intrigued by Rust though.
I've never given it a fair go because I find the syntax so 
distasteful

and deterring.
I wonder if there's a market for a rust fork that re-skin's the 
language ;)


Right now D has the practical benefit of being more stable and 
library-rich. But switching to Rust eventually does seem tempting, 
as I find the foundations of their type system much closer to my 
beliefs about good coding practices.


It lacks any good static reflection though. And that stuff is 
damn addictive once you have tried it at D's caliber.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Walter Bright via Digitalmars-d

On 5/12/2014 4:35 AM, bearophile wrote:

I suggested to add an optional method named onGC to unions that if present is
called at run-time by the GC to know what's the real type of stored data, to
make tracing more precise.


Unions of pointers are so rare in actual code that treating them conservatively 
is not a big problem.




Re: More radical ideas about gc and reference counting

2014-05-12 Thread bearophile via Digitalmars-d

Walter Bright:

Unions of pointers are so rare in actual code that treating 
them conservatively is not a big problem.


std.variant.Algebraic is based on std.variant.VariantN, and 
std.variant.VariantN is based on a union, and often you use 
algebraic data types to represent trees and similar data 
structures that contain many references/pointers. Adding 
an onGC() method to std.variant.VariantN would allow the GC to 
manage Algebraic well enough.
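
A sketch of what such a hook might look like (the onGC name, its signature, and how the GC would consume the result are all hypothetical; VariantN's real storage is more involved than this):

struct TaggedValue
{
    union Store
    {
        void*  asPointer;
        double asNumber;
    }
    Store    store;
    TypeInfo held;      // which member is currently active

    // hypothetical hook: return the pointer the GC must trace, or null
    const(void)* onGC() const
    {
        if (held is typeid(void*))
            return store.asPointer;
        return null;
    }
}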


Bye,
bearophile


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Manu via Digitalmars-d
On 13 May 2014 03:17, Walter Bright via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On 5/12/2014 4:35 AM, bearophile wrote:

 I suggested to add an optional method named onGC to unions that if
 present is
 called at run-time by the GC to know what's the real type of stored data,
 to
 make tracing more precise.


 Unions of pointers are so rare in actual code that treating them
 conservatively is not a big problem.

I find it fairly common.
I just searched through my code, and 7 out of 12 unions had pointers.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Manu via Digitalmars-d
On 13 May 2014 03:14, Dicebot via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On Monday, 12 May 2014 at 17:03:41 UTC, Manu via Digitalmars-d wrote:

 But D is *so close*... and I like it! _

 I have to say that this discussion has certainly left me somewhat
 intrigued by Rust though.
 I've never given it a fair go because I find the syntax so distasteful
 and deterring.
 I wonder if there's a market for a rust fork that re-skin's the language
 ;)


 Right now D has practical benefit of being more stable and library rich. But
 switching to Rust eventually does seem tempting as I find foundations of
 their type system much closer to my beliefs about good coding practices.

 It lacks any good static reflection though. And this stuff is damn addictive
 when you try it of D caliber.

They have a lot more work to do.
There doesn't seem to be a useful windows compiler for a start... _


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Walter Bright via Digitalmars-d

On 5/12/2014 10:31 AM, Manu via Digitalmars-d wrote:

I just searched through my code, and 7 out of 12 unions had pointers.


Relative number of objects with unions, not declarations with unions!


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Walter Bright via Digitalmars-d

On 5/12/2014 10:07 AM, Dicebot wrote:

We have already had discussion where I did state that current @nogc
implementation is not robust enough and failed to explain the use case for
weaker @nogc clearly. Conclusion was that we should return to this topic after
Don's DConf talk ;)


Sure - next week!


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Walter Bright via Digitalmars-d

On 5/12/2014 10:25 AM, bearophile wrote:

Walter Bright:


Unions of pointers are so rare in actual code that treating them
conservatively is not a big problem.


std.variant.Algebraic is based on std.variant.VariantN, and
std.variant.VariantN is based on a union, and often you use algebraic data
types to represent trees and similar data structures that contain many
references/pointers. Adding an onGC() method to std.variant.VariantN would
allow the GC to manage Algebraic well enough.


BTW, the RTinfo can be used to discriminate unions.



Re: More radical ideas about gc and reference counting

2014-05-12 Thread bearophile via Digitalmars-d

Walter Bright:


BTW, the RTinfo can be used to discriminate unions.


I don't know if std.variant.VariantN is already using such 
RTinfo. I don't know much about RTinfo.


Bye,
bearophile


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Walter Bright via Digitalmars-d

On 5/12/2014 7:46 AM, Steven Schveighoffer wrote:

It doesn't matter where the false pointers are. The largest issue with false
pointers is not how many false pointers there are. It only matters how large the
block is that it points at. The larger your blocks get, the more likely they
are pointed at by the stack. On 32-bit systems, allocate 1/256th of your
memory space (i.e. 16.5MB), and the likelihood of random data on the stack
pointing at it is roughly 1/256. This problem is just about eliminated with
64-bit pointers.


Generally, it is a bad idea to allocate such large blocks on the GC heap. GC's 
work best when the size of the objects being allocated is very small relative to 
the size of the heap space.


Fortunately, it's a mathematical inevitability that large allocations relative 
to the GC size are rare, and so it isn't much of a pain to handle them manually.




And in fact, even if it's forbidden, requires is too strong a word -- there is
no static or runtime prevention of this.


It's still forbidden. Andrei wrote a template that will verify this at runtime, 
but I don't recall its name.




Re: More radical ideas about gc and reference counting

2014-05-12 Thread Manu via Digitalmars-d
On 13 May 2014 03:44, Walter Bright via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On 5/12/2014 10:31 AM, Manu via Digitalmars-d wrote:

 I just searched through my code, and 7 out of 12 unions had pointers.


 Relative number of objects with unions, not declarations with unions!

Ah, well I have 3 different tree/graph structures with unions, and
tree/graph nodes have a tendency to accumulate many instances.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Kapps via Digitalmars-d

On Monday, 12 May 2014 at 16:03:28 UTC, Manu via Digitalmars-d
wrote:
How long is a collect liable to take in the event the GC 
threads need

to collect? Am I likely to lose my service threads for 100s of
milliseconds at a time?

I'll think on it, but I don't think there's anything practically
applicable here, and it really sounds like it creates a lot more
trouble and complexity than it addresses.



Your concerns stem not as much from the speed concern of the GC,
but from the freeze-the-world aspect of it. Would a concurrent
collector not solve these issues? As
http://msdn.microsoft.com/en-us/library/ee787088%28v=vs.110%29.aspx#concurrent_garbage_collection
explains a little bit, the actual time your threads spend frozen
should be little (but I admit I don't know exactly how little),
and so long as you don't allocate too much during the collection
itself (which you say you don't), you should be able to keep
running your code during the collection. If it's not possible to
implement concurrent collection in D (and it's already been shown
it is possible), then I'd agree that ARC is very important. But
depending on how little the stop-the-world time from a concurrent
GC can get, perhaps this could work around some issues that
you're desiring ARC for. A generational collector could help in
theory with your high memory usage situations. I doubt you
allocate a gigabyte each frame, so the actual generation 0
content should be fairly low. Much of your memory usage should be
allocations that will not be freed for long periods of time,
while the per-frame and other short allocations should be fast to
collect as there aren't many of them.

Depending on how tunable the GC is, I feel like it should be
possible to get away with a GC even for soft real-time programs
like games. The problem is it's hard to tell until we get a
proper concurrent collector in D2, just like it's hard to tell
how significant the impact of ARC is until we get an optimized
implementation of it in the compiler. Neither of these is simple.
I do quite like the idea of ARC, it's just something that someone
would have to actually implement (well) in order to see how much
of an impact it really has in D. For the truly low frequency
situations, you could get away with a library type for ARC as
well, and as you mentioned, for high frequency you would get
around ARC regardless.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread via Digitalmars-d

On Monday, 12 May 2014 at 17:52:18 UTC, Walter Bright wrote:

On 5/12/2014 7:46 AM, Steven Schveighoffer wrote:
pointing at it is roughly 1/256. This problem is just about 
eliminated with

64-bit pointers.


Not generally true. This presumes that the heap is not in the 
lower region of the address space and that you don't use 64 bit 
ints on the stack.


Generally, it is a bad idea to allocate such large blocks on 
the GC heap. GC's work best when the size of the objects being 
allocated is very small relative to the size of the heap space.


Generally not true. This is a deficiency of not having a smart 
allocator / precise scanning that use available meta information 
properly (obtained statically or by profiling).


Fortunately, it's a mathematical inevitability that large 
allocations relative to the GC size are rare, and so it isn't 
much of a pain to handle them manually.


Programmer pain is not measured in number of instances, but in 
terms of model complexity.


Ola.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Andrei Alexandrescu via Digitalmars-d

On 5/12/14, 10:25 AM, bearophile wrote:

Walter Bright:


Unions of pointers are so rare in actual code that treating them
conservatively is not a big problem.


std.variant.Algebraic is based on std.variant.VariantN, and
std.variant.VariantN is based on a union, and often you use algebraic
data types to represent trees and similar data structures that contain
many references/pointers. Adding an onGC() method to
std.variant.VariantN would allow the GC to manage Algebraic well enough.


I, too, felt the need of onGC() - actually preGC() - in my allocators 
implementation.


Specifically, a thread-local freelist would save a pointer to the root 
in thread-local storage (i.e. a traditional D global variable). That 
would thread through a number of free nodes available for allocation.


When a GC cycle occurs, it's okay if the list stays referenced; the GC 
will consider it used and won't do anything in particular about it. 
However, the GC cycle is a good opportunity to clean these freelists and 
offer the memory for other size classes, seeing as the freelists may 
grow unreasonably large and then just hold memory for no good reason.


A hook that nulls all freelist heads just as the collection process 
starts would be helpful.
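
A rough sketch of the shape being described (all names and the preGC() hook are invented; the only point is that clearing the thread-local head lets the collector reclaim the cached nodes):

import core.memory : GC;

struct FreeList
{
    static struct Node { Node* next; }

    Node* head;                     // in practice a thread-local (TLS) variable

    // a single size class is assumed, just to keep the sketch short
    void* allocate(size_t size)
    {
        if (auto n = head)
        {
            head = n.next;          // pop a cached node
            return n;
        }
        return GC.malloc(size);     // cache empty: fall back to the GC
    }

    void deallocate(void* p)
    {
        auto n = cast(Node*) p;
        n.next = head;              // push the block back onto the cache
        head = n;
    }

    // hypothetical hook, called just before a collection starts:
    void preGC()
    {
        head = null;                // drop the cache so the collector can
                                    // reclaim the nodes for other size classes
    }
}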



Andrei



Re: More radical ideas about gc and reference counting

2014-05-12 Thread Dmitry Olshansky via Digitalmars-d

12-May-2014 22:08, Andrei Alexandrescu wrote:

On 5/12/14, 10:25 AM, bearophile wrote:
A hook that nulls all freelist heads just as the collection process
starts would be helpful.


One word - weak pointers. Then head of freelist is weak and can be 
collected at whim.





Andrei




--
Dmitry Olshansky


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Andrei Alexandrescu via Digitalmars-d

On 5/12/14, 11:17 AM, Dmitry Olshansky wrote:

12-May-2014 22:08, Andrei Alexandrescu wrote:

On 5/12/14, 10:25 AM, bearophile wrote:
A hook that nulls all freelist heads just as the collection process
starts would be helpful.


One word - weak pointers. Then head of freelist is weak and can be
collected at whim.


Of course. My point here is that you need simpler support than 
full-blown weak pointers. -- Andrei




Re: More radical ideas about gc and reference counting

2014-05-12 Thread Manu via Digitalmars-d
On 13 May 2014 04:07, Kapps via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On Monday, 12 May 2014 at 16:03:28 UTC, Manu via Digitalmars-d
 wrote:

 How long is a collect liable to take in the event the GC threads need

 to collect? Am I likely to lose my service threads for 100s of
 milliseconds at a time?

 I'll think on it, but I don't think there's anything practically
 applicable here, and it really sounds like it creates a lot more
 trouble and complexity than it addresses.



 Your concerns stem not as much from the speed concern of the GC,
 but from the freeze-the-world aspect of it. Would a concurrent
 collector not solve these issues?

I originally thought it would... but the more I think on it, I don't
think it would make an awful lot of difference in practice.
If the stalls were 'short' (like 1-5ms on the background threads,
500µs-1ms on the realtime threads), then maybe it would be workable,
but I don't know that it would be even close to that?

Also, I think it would be very difficult to implement on a machine
without virtual memory, or much of an operating system in general?

The problem remains that with no free memory the frequency of collection
becomes so high that it's extremely unlikely that running full collections
so often would be better than ARC.

 As 
 http://msdn.microsoft.com/en-us/library/ee787088%28v=vs.110%29.aspx#concurrent_garbage_collection
 explains a little bit, the actual time your threads spend frozen
 should be little (but I admit I don't know exactly how little),
 and so long as you don't allocate too much during the collection
 itself (which you say you don't), you should be able to keep
 running your code during the collection. If it's not possible to
 implement concurrent collection in D (and it's already been shown
 it is possible), then I'd agree that ARC is very important. But
 depending on how little the stop-the-world time from a concurrent
 GC can get, perhaps this could work around some issues that
 you're desiring ARC for. A generational collector could help in
 theory with your high memory usage situations. I doubt you
 allocate a gigabyte each frame, so the actual generation 0
 content should be fairly low. Much of your memory usage should be
 allocations that will not be freed for long periods of time,
 while the per-frame and other short allocations should be fast to
 collect as there aren't many of them.

Yeah, it would probably be better, if it's possible.
Implementation needs to be considered from the perspective of embedded
systems with no OS or MMU, and as little as 64mb of ram (the smallest
modern systems).
Mid-range systems are 512mb and no MMU. 'next-gen' systems are
basically like little PC's with crappy OS's, so more likely a decent
GC is possible on a ps4/xbone... but very few have the luxury of
developing for just one system.

It occurred to me earlier that things like strings might enjoy their
own separate heap. And maybe some special logic for strings that
outlived their scope to be actively returned to their heap rather than
waiting for collection.
If the heap were successfully broken down into a suite of sub-heaps, I
have absolutely no idea how to make estimates about the performance of
this system, and if it would approach an acceptable level. I'm
skeptical it would, and it still won't decrease collection frequency.
But I'd be happy to be surprised.

 Depending on how tunable the GC is, I feel like it should be
 possible to get away with a GC even for soft real-time programs
 like games. The problem is it's hard to tell until we get a
 proper concurrent collector in D2, just like it's hard to tell
 how significant the impact of ARC is until we get an optimized
 implementation of it in the compiler. Neither of these is simple.
 I do quite like the idea of ARC, it's just something that someone
 would have to actually implement (well) in order to see how much
 of an impact it really has in D.

I understand the problem. The first hurdle is overcoming the hostility
against it though. There is a severe prejudice.

 For the truly low frequency
 situations, you could get away with a library type for ARC as
 well, and as you mentioned, for high frequency you would get
 around ARC regardless.

Yup.



Re: More radical ideas about gc and reference counting

2014-05-12 Thread via Digitalmars-d

On Monday, 12 May 2014 at 18:07:51 UTC, Kapps wrote:

Depending on how tunable the GC is, I feel like it should be
possible to get away with a GC even for soft real-time programs
like games.


Even if you manage to make it work for game clients you also 
should make it work for low latency game servers, as code sharing 
is an important advantage.


What a game/world server requires differs a lot, but highly 
dynamic and flexible worlds have to keep the physics to a single 
node (or tight cluster) for a region. That means you want to have 
as many players as possible tied to that node.


In essence you want both performance, low latency, reliability, 
and little overhead in an evolutionary context (it has to support 
heavy modification over time).


My gut feeling is that a runtime satisfying one game design will 
not satisfy another one as long as one insists on one global GC. 
In essence, it will never really work well. IMO, the same goes 
for ARC since RC does not perform well with multi-threading even 
when you use near optimal patterns and strategies. If ARC is only 
to be used where speed does not matter then you might as well use 
shared_ptr.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Timon Gehr via Digitalmars-d

On 05/12/2014 06:37 PM, Walter Bright wrote:

On 5/12/2014 5:15 AM, Timon Gehr wrote:

On 05/12/2014 10:54 AM, Walter Bright wrote:

On 5/11/2014 10:57 PM, Marco Leise wrote:

On Sun, 11 May 2014 17:50:25 -0700,
Walter Bright newshou...@digitalmars.com wrote:


As long as those pointers don't escape. Am I right in that one cannot
store a
borrowed pointer into a global data structure?


Right, and that's the point and entirely positive-to-do™.


This means that a global data structure in Rust has to decide what
memory allocation scheme its contents must use,


Global variables are banned in Rust code outside of unsafe blocks.


Global can also mean assigning through a reference passed as a parameter.



Do you mean the table is not actually global but passed by parameter, or 
that the global table is accessed in unsafe code and then passed by 
parameter or something else?


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Jacob Carlborg via Digitalmars-d

On 2014-05-12 19:14, Dicebot wrote:


It lacks any good static reflection though. And this stuff is damn
addictive when you try it of D caliber.


It has macros, which basically require great support for static 
reflection to be usable.


--
/Jacob Carlborg


Re: More radical ideas about gc and reference counting

2014-05-12 Thread bearophile via Digitalmars-d

Andrei Alexandrescu:

I, too, felt the need of onGC() - actually preGC() - in my 
allocators implementation.

...
A hook that nulls all freelist heads just as the collection 
process starts would be helpful.


How is this going to help increase tracing precision of unions 
(and Algebraic built on top of unions)?


Bye,
bearophile


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Andrei Alexandrescu via Digitalmars-d

On 5/12/14, 12:59 PM, bearophile wrote:

Andrei Alexandrescu:


I, too, felt the need of onGC() - actually preGC() - in my allocators
implementation.
...
A hook that nulls all freelist heads just as the collection process
starts would be helpful.


How is this going to help increase tracing precision of unions (and
Algebraic built on top of unions)?


How did I give the impression it has anything to do with unions? -- Andrei




Re: More radical ideas about gc and reference counting

2014-05-12 Thread bearophile via Digitalmars-d

Andrei Alexandrescu:

How did I give the impression it has anything to do with 
unions? -- Andrei


OK, so yours is not an answer to my proposal, nor related to it.

Bye,
bearophile


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Walter Bright via Digitalmars-d

On 5/12/2014 12:36 PM, Timon Gehr wrote:

Do you mean the table is not actually global but passed by parameter,


Yes.

But note that the distinction between the two is often blurry. Under the hood on 
some systems, global data is accessed via the equivalent of a hidden parameter.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Steven Schveighoffer via Digitalmars-d
On Mon, 12 May 2014 14:14:28 -0400, Ola Fosheim Grøstad  
ola.fosheim.grostad+dl...@gmail.com wrote:



On Monday, 12 May 2014 at 17:52:18 UTC, Walter Bright wrote:

On 5/12/2014 7:46 AM, Steven Schveighoffer wrote:
pointing at it is roughly 1/256. This problem is just about eliminated  
with

64-bit pointers.


Not generally true. This presumes that the heap is not in the lower  
region of the address space and that you don't use 64 bit ints on the  
stack.


I was thinking in terms of purely a random number happening to point at  
heap data. Practically speaking, I don't know the true likelihood based on  
the heap address scheme of 64-bit OSes, but I know that we always have a  
complainer who will try and do an array-append test on 32-bit code, and  
end up exhausting memory unexpectedly.


-Steve


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Xavier Bigand via Digitalmars-d

Le 12/05/2014 19:14, Dicebot a écrit :

On Monday, 12 May 2014 at 17:03:41 UTC, Manu via Digitalmars-d wrote:

But D is *so close*... and I like it! _

I have to say that this discussion has certainly left me somewhat
intrigued by Rust though.
I've never given it a fair go because I find the syntax so distasteful
and deterring.
I wonder if there's a market for a rust fork that re-skin's the
language ;)


Right now D has practical benefit of being more stable and library rich.
But switching to Rust eventually does seem tempting as I find
foundations of their type system much closer to my beliefs about good
coding practices.

It lacks any good static reflection though. And this stuff is damn
addictive when you try it of D caliber.


All of D's compile-time features are marvelous.
The compile-time capabilities, together with a language that is less 
error-prone, are what make me want D.
I am not sure I need safety that much. It's nice, but not mandatory 
for any of my projects. The only one that has to be safe is DQuick.
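
For readers who haven't used it, here is a tiny taste of the compile-time 
reflection being praised in this subthread -- my own illustration, not 
code from the thread:

import std.stdio : writeln;

struct Point { int x; int y; }

void main()
{
    // iterate over the members of a type at compile time; the loop is
    // unrolled by the compiler
    foreach (name; __traits(allMembers, Point))
        writeln(name);      // prints "x" then "y"

    // and make compile-time decisions based on it
    static assert(__traits(hasMember, Point, "x"));
}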




Re: More radical ideas about gc and reference counting

2014-05-12 Thread Steven Schveighoffer via Digitalmars-d
On Mon, 12 May 2014 13:52:20 -0400, Walter Bright  
newshou...@digitalmars.com wrote:



On 5/12/2014 7:46 AM, Steven Schveighoffer wrote:
It doesn't matter where the false pointers are. The largest issue with  
false
pointers is not how many false pointers there are. It only matters how  
large the
block is that it points at. The larger your blocks get, the more  
likely they
are pointed at by the stack. On 32-bit systems, allocate 1/256th of  
your
memory space (i.e. 16.5MB), and the likelihood of random data on the  
stack
pointing at it is roughly 1/256. This problem is just about eliminated  
with

64-bit pointers.


Generally, it is a bad idea to allocate such large blocks on the GC  
heap. GCs work best when the size of the objects being allocated is  
very small relative to the size of the heap space.


Fortunately, it's a mathematical inevitability that large allocations  
relative to the GC size are rare, and so it isn't much of a pain to  
handle them manually.


The issue arises when one allocates such a large block for temporary use  
repeatedly, but expects it to be collected between allocations. The  
consequences can be disastrous.
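
A sketch of the pattern being warned about (my own illustration):

void processChunks(int n)
{
    foreach (i; 0 .. n)
    {
        auto buf = new ubyte[](16 * 1024 * 1024); // large temporary,
                                                  // a fresh one each pass
        // ... fill and use buf ...

        // buf is dead after this point, but with a conservative GC any
        // stack word that merely looks like a pointer into those 16 MB
        // keeps the whole block alive; repeat often enough and memory
        // balloons.
    }
}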


The workaround is simply to keep it around, but that's not always a  
scalable solution.


And in fact, even if it's forbidden, "requires" is too strong a word --  
there is no static or runtime prevention of this.


It's still forbidden. Andrei wrote a template that will verify this at  
runtime, but I don't recall its name.


Can you cite the spec where it says it's forbidden? Forgotten templates  
are not a convincing argument.


Regardless, Java can use a moving GC, and allows self references. The idea  
that self references prevent a moving GC is simply false. If you think  
about it a bit, you will understand why.
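
To illustrate (my own example, not from the thread): a self reference is  
just another reference the collector can patch when it moves the object.

class Node
{
    Node self;                  // a self reference: points back at this
    this() { self = this; }     // very same object
}

void main()
{
    auto n = new Node;
    assert(n.self is n);
    // A precise moving collector that relocates n simply rewrites every
    // reference it finds -- including n.self -- so the cycle is no
    // obstacle. What actually hinders moving is imprecision: words that
    // only *might* be pointers cannot be safely updated.
}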


-Steve


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Steven Schveighoffer via Digitalmars-d
On Mon, 12 May 2014 17:32:09 -0400, Steven Schveighoffer  
schvei...@yahoo.com wrote:





The workaround is simply to keep it around, but that's not always a  
scalable solution.


Sorry, actually you can free it. That's the correct workaround.
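
Roughly like this, using the existing core.memory API (my sketch):

import core.memory : GC;

void processChunk()
{
    auto buf = new ubyte[](16 * 1024 * 1024);
    // addrOf finds the base of the block in case the array data doesn't
    // start right at the beginning of the allocation
    scope(exit) GC.free(GC.addrOf(buf.ptr));
    // ... use buf; just don't keep any slice of it past this function ...
}

This releases the block deterministically instead of hoping a later 
collection finds it unreferenced.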

-Steve


Re: More radical ideas about gc and reference counting

2014-05-12 Thread via Digitalmars-d
On Monday, 12 May 2014 at 21:22:09 UTC, Steven Schveighoffer 
wrote:
On Mon, 12 May 2014 14:14:28 -0400, Ola Fosheim Grøstad 
ola.fosheim.grostad+dl...@gmail.com wrote:



On Monday, 12 May 2014 at 17:52:18 UTC, Walter Bright wrote:

On 5/12/2014 7:46 AM, Steven Schveighoffer wrote:
pointing at it is roughly 1/256. This problem is just about 
eliminated with

64-bit pointers.


Not generally true. This presumes that the heap is not in the 
lower region of the address space and that you don't use 64 
bit ints on the stack.


I was thinking in terms of purely a random number happening to 
point at heap data. Practically speaking, I don't know the true 
likelihood based on the heap address scheme of 64-bit OSes, but


Wicked topic. In AMD64 mode high memory is usually reserved for the 
kernel etc. Traditionally the unixy heap grew from low towards high 
addresses:


http://en.wikipedia.org/wiki/Sbrk

But that is legacy. I think mmap is it… :-P And layout is 
randomized to reduce the effect of buffer overflow etc.


:-(

I know that we always have a complainer who will try and do an 
array-append test on 32-bit code, and end up exhausting memory 
unexpectedly.


Uhuh. Not focusing on precise collection gets ugly.



Re: More radical ideas about gc and reference counting

2014-05-12 Thread Martin Nowak via Digitalmars-d

On 05/11/2014 08:18 PM, Rainer Schuetze wrote:


1. Use a scheme that takes a snapshot of the heap, stack and registers
at the moment of collection and do the actual collection in another
thread/process while the application can continue to run. This is the
way Leandro Lucarella's concurrent GC works
(http://dconf.org/2013/talks/lucarella.html), but it relies on fork,
which doesn't exist on every OS/architecture. A manual copy of the memory
won't scale to very large heaps, though it might be compressed down to
just the possible pointers. Worst case it will need twice as much memory
as the current heap.


There is a problem with this scheme: copy-on-write is extremely 
expensive when a mutation happens. That's one page fault (a context 
switch) + copying a whole page + mapping the new page. It's much worse 
with huge pages (2 MB page size).
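
For illustration, the fork-based snapshot scheme looks roughly like the 
sketch below (POSIX-only, heavily simplified; markSnapshot and 
applyMarkResults are placeholders of mine, not the real implementation). 
The cost described above is paid by the parent for every page it writes 
while the child is still tracing.

import core.sys.posix.unistd : fork, _exit;
import core.sys.posix.sys.wait : waitpid;

void markSnapshot() { /* trace the frozen copy-on-write snapshot */ }
void applyMarkResults() { /* sweep in the parent using the child's findings */ }

void concurrentCollect()
{
    auto pid = fork();      // child sees a copy-on-write snapshot of the heap
    if (pid == 0)
    {
        markSnapshot();     // runs concurrently with the still-mutating parent
        _exit(0);           // results would be shipped back via shared memory
    }

    // parent keeps running; each write to a not-yet-copied page costs a
    // page fault + a page copy + a remap (worse with 2 MB huge pages)
    int status;
    waitpid(pid, &status, 0);   // simplified: a real collector wouldn't block
    applyMarkResults();
}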


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Kapps via Digitalmars-d

On Monday, 12 May 2014 at 19:13:50 UTC, Ola Fosheim Grøstad wrote:

On Monday, 12 May 2014 at 18:07:51 UTC, Kapps wrote:

Depending on how tunable the GC is, I feel like it should be
possible to get away with a GC even for soft real-time programs
like games.


Even if you manage to make it work for game clients you also 
should make it work for low latency game servers, as code 
sharing is an important advantage.


What a game/world server requires differs a lot, but highly 
dynamic and flexible worlds have to keep the physics to a 
single node (or tight cluster) for a region. That means you 
want to have as many players as possible tied to that node.


In essence you want performance, low latency, reliability, 
and little overhead in an evolutionary context (it has to 
support heavy modification over time).


My gut feeling is that a runtime satisfying one game design 
will not satisfy another one as long as one insists on one 
global GC. In essence, it will never really work well. IMO, the 
same goes for ARC since RC does not perform well with 
multi-threading even when you use near optimal patterns and 
strategies. If ARC is only to be used where speed does not 
matter then you might as well use shared_ptr.


.NET allows configuring the garbage collector by choosing between
workstation mode (concurrent/background -- generation 0/1 collections
can run while a generation 2 collection is in progress -- with one
primary heap plus a large object heap) and server mode (not certain if
it is concurrent/background, but it uses multiple heaps that are
handled in parallel during collections). In situations where many
processes run at once, you can also disable concurrent collection to
reduce context-switching overhead. In reality, most people leave the
default concurrent collector, which is what I'd hope the default for D
would be; but if it were sufficiently tunable, something like vibe.d
could decide to go with something closer to what .NET uses for servers
(which ASP.NET uses by default).
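
For reference, those knobs are exposed in the .NET Framework through 
app.config; choosing the server flavour and turning off concurrent 
collection looks roughly like this (standard configuration elements, 
values picked purely for illustration):

<configuration>
  <runtime>
    <gcServer enabled="true"/>       <!-- per-core heaps, parallel collection -->
    <gcConcurrent enabled="false"/>  <!-- no background/concurrent collection -->
  </runtime>
</configuration>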

I haven't been able to find good concrete numbers online, but the
few sources I've found say that generation 0/1 collections tend
to take roughly 1-3 milliseconds and are not run concurrently
because they are so short. This is quite sufficient for most
projects, though it could perhaps be tweaked further for use cases
like gaming, possibly even enabling concurrent collection for
generation 0/1 -- I'm not sure whether that works well or is
feasible. Still, the important thing is to get a good
general-purpose collector first, like the default one .NET uses
for workstation applications.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Walter Bright via Digitalmars-d

On 5/12/2014 2:32 PM, Steven Schveighoffer wrote:

It's still forbidden. Andrei wrote a template that will verify this at
runtime, but I don't recall its name.


Can you cite the spec where it says it's forbidden? Forgotten templates are not
a convincing argument.

Regardless, Java can use a moving GC, and allows self references. The idea that
self references prevent a moving GC is simply false. If you think about it a
bit, you will understand why.



I see this is not specified in the documentation. Not sure what happened here, 
but I'll have to think about it.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread via Digitalmars-d

On Monday, 12 May 2014 at 22:27:06 UTC, Kapps wrote:

because they are so short. This is quite sufficient for most 
projects, though it could perhaps be tweaked further for use cases 
like gaming, possibly even enabling concurrent collection for 
generation 0/1 -- I'm not sure whether that works well or is 
feasible. Still, the important thing is to get a good 
general-purpose collector first, like the default one .NET uses 
for workstation applications.


I agree that getting a good (100% precise) GC is an important 
first step. I am not so sure about generation-based GC when you 
have a window onto a world map that you move around, which is 
roughly FIFO (first in, first out).


But to get good speed I think you are better off having multiple 
pools that can be released with no collection when a 
network-connection drops (if you have one conceptual pool per 
connection), and optimized allocators that give you 
pre-initialized objects etc.
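
A bare-bones sketch of the per-connection pool idea (my own code, the 
type name is invented): everything a connection allocates is tied to its 
pool, and dropping the connection releases it all without any tracing.

import core.stdc.stdlib : malloc, free;

struct ConnectionPool
{
    private void*[] blocks;     // every raw block handed out for this connection

    void* allocate(size_t n)
    {
        auto p = malloc(n);     // a real pool would carve from large slabs
        blocks ~= p;
        return p;
    }

    void release()              // connection dropped: hand it all back at once,
    {                           // no collection, no per-object bookkeeping
        foreach (p; blocks)
            free(p);
        blocks = null;
    }
}

A per-connection object such as the PlayerConnection mentioned below 
could own one of these and route its allocations through it.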


In the ideal world all of this is transparent once you have 
specified your memory model (in detail), so you only have to 
issue a new PlayerConnection in the main logic of your program 
and can tweak the memory handling elsewhere. That is not the D 
way, from what I can tell from the forum posts so far, because 
new is going to stay tied to one global GC heap. So you have to 
write utility functions… which makes programs less legible.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Walter Bright via Digitalmars-d

On 5/12/2014 2:28 PM, Xavier Bigand wrote:

All of D's compile-time features are marvelous.
The compile-time capabilities, together with a language that is less
error-prone, are what make me want D.
I am not sure I need safety that much. It's nice, but not mandatory for any
of my projects. The only one that has to be safe is DQuick.


Safety becomes a big concern when you're developing code as part of a team.


