Re: GC memory fragmentation

2021-04-13 Thread Tobias Pankrath via Digitalmars-d-learn

On Tuesday, 13 April 2021 at 12:30:13 UTC, tchaloupka wrote:


Some kind of GC memory dump and analyzer tool like the mentioned 
`Diamond` would be of tremendous help to diagnose this.


You could try to get the stack traces of the allocating calls via 
eBPF. Maybe that leads to some new insights.
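
A sketch of that approach with bpftrace (assuming an unstripped 
binary in which druntime's `gc_malloc` symbol is visible; the 
binary path and symbol name are placeholders you would have to 
adapt):

```
# Aggregate user-space stack traces at every GC allocation entry;
# printing the map on Ctrl-C shows which call sites allocate most.
sudo bpftrace -e 'uprobe:/path/to/app:gc_malloc { @[ustack] = count(); }'
```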






Re: "this" as default parameter for a constructor.

2021-04-13 Thread Mathias LANG via Digitalmars-d-learn

On Sunday, 11 April 2021 at 20:38:10 UTC, Pierre wrote:

Hi,

I have a class with a reference to the parent object and a 
constructor that has the parent as parameter


class foo {
    this ( foo p /* , other params */ ) {
        parent = p;
    }

    foo parent;
}

Of course, the parent is almost always the object that creates 
the new instance. So


auto f = new foo(this);

I'd like to set "this" (the creator) as the default argument of 
the constructor:


this ( foo p = this ) {...}

I can't. However, the calling function, the argument, and the 
current object in the body of the constructor are three different 
things, and the compiler can distinguish each one.


Is there a way to pass a reference to the caller (the creator) to 
the constructor as a default argument?


Depending on what you are trying to do, I would recommend going 
with nested classes instead, if you can. E.g.:

```D
class MyClass {
    class MyChild {
        this (int value) { this.value = value; }
        private int value;
    }
}

void main ()
{
    auto mc = new MyClass;
    auto child = mc.new MyChild(42);
}
```

It'll give you an automatic reference to the parent. Of course, if 
you are trying to do something like a linked list, where all 
elements have the same type, it won't work.
In this case, the `create` approach might be better. You should 
be able to cook something with a template `this` parameter to 
reduce boilerplate.
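
A sketch of that template `this` idea (the names here are made up, 
and it assumes every subclass constructor takes the parent as its 
first argument):

```d
class Node
{
    Node parent;
    this (Node p) { parent = p; }

    // One factory in the base class: the template `this` parameter T
    // is inferred as the static type of the receiver, so calling it
    // on a subclass instance constructs that subclass, with the
    // receiver as its parent.
    T create(this T, Args...)(Args args)
    {
        return new T(this, args);
    }
}

class Leaf : Node
{
    int value;
    this (Node p, int v) { super(p); value = v; }
}

void main ()
{
    auto root = new Leaf(null, 1);
    Leaf child = root.create(2); // no per-subclass create needed
    assert(child.parent is root);
    assert(child.value == 2);
}
```

This way a single `create` in the base covers every subclass, as 
long as the receiver's static type is the subclass you want.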


And regarding allowing `this` as a default argument: definitely 
not. While it could be made possible with some stretch (e.g. we'd 
have to delay default-parameter semantics to the call site, unlike 
what is currently done, and that would mess with things like 
overload resolution and template type inference), it wouldn't be 
sound, and it would be very surprising. I for one would expect 
`this` to be the object referencing itself, not the `this` of my 
caller.


Re: "this" as default parameter for a constructor.

2021-04-13 Thread Pierre via Digitalmars-d-learn

On Monday, 12 April 2021 at 13:14:27 UTC, Kagamin wrote:

class foo {
    this ( foo p /* , other params */ ) {
        parent = p;
    }
    foo create() {
        return new foo(this);
    }
    void use() {
        foo f = create();
    }

    foo parent;
}


It's a solution. But my foo class is a base for several 
subclasses, each of which has its own constructor with different 
parameters; I would have to define as many create functions as 
there are subclass constructors, which is inelegant. I tried to 
solve it with templates but couldn't find a satisfactory solution.


As @Jack says, `this ( T p = this ){...}` is not supported, not 
yet; maybe it should be?


For now, the "best" way I found is to write `auto f = new foo(this);`


Re: How to allow +=, -=, etc operators and keep encapsulation?

2021-04-13 Thread Steven Schveighoffer via Digitalmars-d-learn

On 4/12/21 2:16 PM, Jack wrote:

Given this class:

```d
class A
{
    int X() { return x; }
    int X(int v) { return x = v; }

    private int x;
}
```

I'd like to allow use of the ```+=``` and ```-=``` operators on 
```X()``` while keeping encapsulation. What's a somewhat elegant way to do that?


It's really hard to do.

One problem is lifetime management. There is no way to do something 
like `ref` here: `ref` deliberately does not provide a way to make 
a copy of the original thing (the reference) in @safe code.


But the way I'd tackle it is to write a pseudo-reference wrapper that 
forwards to the getter/setter.
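
A minimal sketch of such a pseudo-reference wrapper (the `PropRef` 
name and the delegate-based design are illustrative, not an 
established idiom; keeping the delegates' context alive is the 
caller's responsibility):

```d
import std.stdio;

// Hypothetical pseudo-reference: wraps getter/setter delegates and
// forwards op-assign operators to them, so `+=` works without
// exposing the private field.
struct PropRef(T)
{
    T delegate() get;
    T delegate(T) set;

    T opOpAssign(string op)(T rhs)
    {
        return set(mixin("get() " ~ op ~ " rhs"));
    }
}

class A
{
    private int x;
    int X() { return x; }
    int X(int v) { return x = v; }

    // Expose X as an assignable pseudo-reference.
    PropRef!int Xref()
    {
        return PropRef!int(() => X(), (int v) => X(v));
    }
}

void main ()
{
    auto a = new A;
    a.X = 1;
    a.Xref() += 4;  // rewritten to opOpAssign!"+": X(X() + 4)
    assert(a.X == 5);
    writeln(a.X);
}
```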


I'm sure there's a mixin template solution that works, I just don't have 
the time to code it out right now.


You can take a look at Mike Franklin's unpublished DIP here for inspiration:
* [DIP conversation](https://github.com/dlang/DIPs/pull/97)
* [DIP 
text](https://github.com/dlang/DIPs/blob/fdd016a16bf1898fda901b9d716f5bcc6021c1a7/DIPs/DIP1xxx-mvf.md)


-Steve


Re: GC memory fragmentation

2021-04-13 Thread tchaloupka via Digitalmars-d-learn

On Monday, 12 April 2021 at 07:03:02 UTC, Sebastiaan Koppe wrote:


We have similar problems: we see memory usage alternate between 
plateauing and slowly growing, until it hits the configured 
maximum memory for that job and the orchestrator kills it (we run 
multiple instances and have good failover).


I have reduced the problem by refactoring some of our gc usage, 
but the problem still persists.


On a side note, it would also be good if the GC could be made 
aware of the maximum memory it is allotted, so that it knows it 
needs to do more aggressive collections when nearing it.


I knew this must be a more common problem :)

What I've found in the meantime:

* nice writeup of how GC actually works by Vladimir Panteleev - 
https://thecybershadow.net/d/Memory_Management_in_the_D_Programming_Language.pdf
  * the described tool (https://github.com/CyberShadow/Diamond) would 
be very helpful, but I assume it's for D1 and based on some old 
druntime fork :(
* we've implemented log rotation using `std.zlib` (by just 
`foreach (chunk; fin.byChunk(4096).map!(x => c.compress(x))) 
fout.rawWrite(chunk);`)
  * oh boy, don't use `std.zlib.Compress` that way; it allocates 
each chunk, and for large files it creates large GC memory peaks 
that sometimes don't go down

  * rewritten using the raw `etc.c.zlib` API, completely outside the GC
* currently testing with `--DRT-gcopt=incPoolSize:0`, as otherwise 
the size of each newly allocated pool grows with the number of 
existing pools (by 3MB per pool by default)
* `profile-gc` is not very helpful in this case, as it only 
prints the total allocated memory for each allocation site on 
application exit, and since it's a long-running service using 
many different libraries, that's just hundreds of lines :)
  * I've considered forking the process periodically, terminating 
the fork, and renaming the created profile statistics to at least 
see the differences between states, but I'm still not sure it 
would help much
* as I understand it, the GC uses different algorithms for small 
allocations and for large objects

  * small (`<=2048` bytes)
    * it categorizes objects into a fixed set of size classes, and 
for each class it uses a whole memory page as a bucket with a free 
list, from which it serves allocation requests
    * when a bucket is full, a new page is allocated and 
allocations are served from that

  * big - similar, but it allocates N pages as a pool
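
The `etc.c.zlib` rewrite mentioned above might look roughly like 
this (a sketch, not our actual code; buffer sizes and error 
handling are simplified):

```d
import etc.c.zlib;
import std.stdio;

// Compress a file chunk by chunk with the raw zlib C API, reusing
// fixed buffers so the loop itself does no GC allocation.
void compressFile(string inPath, string outPath)
{
    auto fin = File(inPath, "rb");
    auto fout = File(outPath, "wb");

    z_stream zs;
    if (deflateInit(&zs, Z_DEFAULT_COMPRESSION) != Z_OK)
        throw new Exception("deflateInit failed");
    scope (exit) deflateEnd(&zs);

    ubyte[4096] inBuf;
    ubyte[4096] outBuf;

    // Run deflate until it stops filling the output buffer,
    // writing each compressed piece straight to the file.
    void drain(int flush)
    {
        do
        {
            zs.next_out  = outBuf.ptr;
            zs.avail_out = cast(uint) outBuf.length;
            deflate(&zs, flush);
            fout.rawWrite(outBuf[0 .. $ - zs.avail_out]);
        } while (zs.avail_out == 0);
    }

    foreach (chunk; fin.byChunk(inBuf[])) // byChunk reuses inBuf
    {
        zs.next_in  = chunk.ptr;
        zs.avail_in = cast(uint) chunk.length;
        drain(Z_NO_FLUSH);
    }
    drain(Z_FINISH); // flush the remaining compressed data
}
```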

So if I understand it correctly: when, for example, vibe-d 
initializes a new fiber for a request, the request is handled, and 
the fiber could be discarded, this can easily lead to a scenario 
where the fiber itself is allocated in one page, that page fills 
up during request processing so a new page is allocated, and 
during collection the bucket holding the fiber cannot be cleaned 
up, because the fiber has been added to a `TaskFiber` pool (with a 
fixed size). This way the fiber's bucket would never be freed and 
might never be used again during the application's lifetime.


I'm not sure whether pages of small (or large) objects that are 
not completely empty can be reused as a new bucket, or whether 
only completely free pages can be reused.


Does anyone have some insight into this?

Some kind of GC memory dump and analyzer tool like the mentioned 
`Diamond` would be of tremendous help to diagnose this.