Re: Memory management
On Tuesday, 29 September 2020 at 10:57:07 UTC, novice3 wrote: Naive newbie question: Can we have (in theory) in D lang memory management like V lang? I don't know V so can't be sure, but doing it the same way as in the examples sounds possible. The first two calls are easy. D string literals are stored in read-only memory that is loaded together with the machine code before execution, so a naive implementation wouldn't allocate any more than the V version does. The third one is more difficult. Allocating without using GC or RC means using an owning container that releases the memory when exiting the scope. While such containers can be checked against leaked references using the `-dip1000` feature, that's practical only when the memory has a single clear owner and there is no need to store references to it elsewhere. GC and RC are better as general-purpose solutions. On the other hand, D could avoid all allocation in the third example by using ranges. That would let the printing function compute the message lazily as it prints.
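For illustration (not from the original post), a minimal sketch of the range idea: `chain` builds a lazy view over the pieces, and the writer walks it as it prints, so no concatenated string is ever allocated.

```
import std.range : chain;
import std.stdio : writeln;

void main()
{
    string name = "world";
    // chain produces a lazy range over the three pieces; writeln
    // consumes it element by element, so no intermediate string
    // is allocated to hold the full message.
    auto msg = chain("hello, ", name, "!");
    writeln(msg);
}
```

The same pattern scales up: any formatting pipeline built from lazy range combinators defers the work to the final sink.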
Re: Memory management
On Tuesday, 29 September 2020 at 16:03:39 UTC, IGotD- wrote: That little simple example shows that you don't necessarily need to know things in advance in order to have static lifetimes. However, there are examples where there is no possibility for the compiler to infer when the object goes out of scope. Multiple ownership is an obvious example of that. Seems like a mix of Go and Rust... People will probably end up using array indexes instead of references... Just like in Rust.
Re: Memory management
On Tuesday, 29 September 2020 at 10:57:07 UTC, novice3 wrote: Naive newbie question: Can we have (in theory) in D lang memory management like V lang? Quote: https://github.com/vlang/v/blob/master/doc/docs.md#memory-management "V doesn't use garbage collection or reference counting. The compiler cleans everything up during compilation. If your V program compiles, it's guaranteed that it's going to be leak free." Completely avoiding the question about D, all it says in that section is "The strings don't escape draw_text, so they are cleaned up when the function exits. In fact, the first two calls won't result in any allocations at all. These two strings are small, V will use a preallocated buffer for them." That's a toy example. It would be hard to mess that up in C, and I'd expect it to be easy for the compiler to handle it.
Re: Memory management
On Tuesday, 29 September 2020 at 15:47:09 UTC, Ali Çehreli wrote: I am not a language expert but I can't imagine how the compiler knows whether an event will happen at runtime. Imagine a server program allocates memory for a client. Let's say that memory will be deallocated when the client logs out or the server times that client out. The compiler cannot know whether either of those will ever happen, right? Ali It doesn't need to know when a certain event happens; it works based on ownership and program flow. Let's think Rust for a moment. You get a connection and you allocate some metadata about it. Then you put that metadata on a list, and the list owns that object. It will stay there as long as there is a connection, and the program can go and do other things; the list still owns the metadata. After a while the client disconnects, and the program finds the metadata in the list, removes it, and puts it in a local variable. Some cleanup is done, but the metadata is still owned by the local variable, and when that variable goes out of scope (basically the end of a {} block), it will be deallocated. It's really a combination of ownership and scopes. That little simple example shows that you don't necessarily need to know things in advance in order to have static lifetimes. However, there are examples where there is no possibility for the compiler to infer when the object goes out of scope. Multiple ownership is an obvious example of that.
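The flow described above can be sketched in D terms, with a struct destructor standing in for Rust's drop. This is an illustrative sketch only; the names (`Metadata`, `onDisconnect`) are made up:

```
struct Metadata
{
    int connId;
    // Runs deterministically when the owning copy goes out of
    // scope -- the D analogue of Rust's Drop in this sketch.
    ~this() { /* release per-connection resources here */ }
}

void onDisconnect(ref Metadata[int] connections, int id)
{
    // Move the metadata out of the "list" into a local variable...
    Metadata m = connections[id];
    connections.remove(id);
    // ...do whatever cleanup needs m...
} // ...and at the end of this scope, m's destructor runs.
```

The point of the sketch is that no runtime event tracking is needed: the deallocation point falls out of which scope happens to own the value last.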
Re: Memory management
On 9/29/20 3:57 AM, novice3 wrote: > Naive newbie question: > > Can we have (in theory) in D lang memory management like V lang? > > Quote: > https://github.com/vlang/v/blob/master/doc/docs.md#memory-management > > "V doesn't use garbage collection or reference counting. The compiler > cleans everything up during compilation. If your V program compiles, > it's guaranteed that it's going to be leak free." I am not a language expert but I can't imagine how the compiler knows whether an event will happen at runtime. Imagine a server program allocates memory for a client. Let's say that memory will be deallocated when the client logs out or the server times that client out. The compiler cannot know whether either of those will ever happen, right? Ali
Memory management
Naive newbie question: Can we have (in theory) in D lang memory management like V lang? Quote: https://github.com/vlang/v/blob/master/doc/docs.md#memory-management "V doesn't use garbage collection or reference counting. The compiler cleans everything up during compilation. If your V program compiles, it's guaranteed that it's going to be leak free."
Re: Just another question about memory management in d from a newbie
On Monday, 17 June 2019 at 20:26:28 UTC, H. S. Teoh wrote: On Mon, Jun 17, 2019 at 07:53:52PM +0000, Thomas via Digitalmars-d-learn wrote: [...] [...] If x were a heap-allocated object, then your concerns would be true: it would be allocated once every iteration (and also add to the garbage that the GC will have to collect later). [...] Thank you for your exact explanation of how the compiler works inside. That clears up some questions that have bothered me. Thomas
Re: Just another question about memory management in d from a newbie
On Mon, Jun 17, 2019 at 07:53:52PM +0000, Thomas via Digitalmars-d-learn wrote: [...]
> int main()
> {
>     foreach(i; 0 .. 1000)
>     {
>         int x;
>         // do something with x
>     }
>     return 0;
> }
>
> Do I understand it right that the variable x will be created 1000 times and destroyed at the end of the scope in each loop? Or will it be overwritten by creation 1000 times?

If x were a heap-allocated object, then your concerns would be true: it would be allocated once every iteration (and also add to the garbage that the GC will have to collect later). However, in this case x is an int, which is a value type. This means it will be allocated on the stack, and allocation is as trivial as bumping the stack pointer (practically zero cost), and deallocation is as simple as bumping the stack pointer the other way. In fact, it doesn't even have to bump the stack pointer between iterations, since it's obvious from code analysis that x will always be allocated at the same position on the stack relative to the function's call frame, so in the generated machine code it can be as simple as just using that memory location directly, and reusing the same location between iterations.

> I mean, does it not cost the CPU some time to allocate (I know we're talking here of nsec), but work is work. As far as I know from my school days in assembler, allocation of memory is one of the most expensive instructions on the CPU. Activate memory block, find some free place, reserve, return pointer to caller, and so on...

That's not entirely accurate. It depends on how the allocation is done. Allocation on the stack is extremely cheap, and consists literally of adding some number (the size of the allocation) to a register (the stack pointer) -- you cannot get much simpler than that. Also, memory allocation on the heap is not "one of the most expensive instructions". The reason it's expensive is because it's not a single instruction, but an entire subroutine of instructions that manages the heap.
There is no CPU I know of that has a built-in instruction for heap allocation.

> In my opinion this version should perform a little better:
>
> int main()
> {
>     int x;
>     foreach(i; 0 .. 1000)
>     {
>         x = 0; // reinitialize
>         // do something with x
>     }
>     return 0;
> }
>
> Or do I miss something and there is an optimization by the compiler to avoid recreating/destroying the variable x 1000 times?
[...]

What you wrote here is exactly how the compiler will emit the machine code given your first code example above. Basically all modern optimizing compilers will implement it this way, and you never have to worry about stack allocation being slow. The only time you will have a performance problem is if x is a heap-allocated object. In *that* case you will want to look into reusing previous instances of the object between iterations. But if x is a value type (int, struct, etc.), you don't have to worry about this at all. The first version of the code you have above is the preferred one, because it makes the code clearer. On a related note, in modern programming languages you should generally be more concerned about the higher-level meaning of the code than about low-level details like how instructions are generated, because the machine code generated by the compiler is highly transformed from the original source code and generally will not have a 1-to-1 mapping to the higher-level logical meaning of the code. I.e., the first version of the code *logically* allocates x at the beginning of each iteration of the loop and deallocates it at the end, but the actual generated machine code elides all of that, because the compiler's optimizer can easily see that it's a stack allocation that always ends up in the same place, so none of the allocation/deallocation actually needs to be represented as-is in the machine code.
On another note, for performance-related concerns the general advice these days is to write something in the most straightforward, logical way first, and then, if the performance is not good enough, **use a profiler** to identify where the hotspots are, and optimize those. Trying to optimize before you have real-world profiler data showing that your code is a hotspot is premature optimization, widely regarded as evil, because it usually leads to overly convoluted code that's hard to understand and maintain, and often actually *slower*, because the way you expressed the meaning of the code has become so obfuscated that the optimizer can't figure out what you're actually trying to do, so it gives up and doesn't even try to apply optimizations that might have benefitted the code. (Of course, the above is with the caveat that writing "straightforward" code only holds up to a certain extent; when it comes to algorithms, for example, no compiler is going to change an O(n^2) algorithm into an O(log n) one; you have to select the appropriate algorithm in the first place.)
Re: Just another question about memory management in d from a newbie
First, thank you for your fast reply! On Monday, 17 June 2019 at 20:00:34 UTC, Adam D. Ruppe wrote: No, the compiler will generate code to reuse the same thing each loop. Does this also work for complex types like structs?
Re: Just another question about memory management in d from a newbie
On Monday, 17 June 2019 at 19:53:52 UTC, Thomas wrote: Do I understand it right that the variable x will be created 1000 times and destroyed at the end of the scope in each loop? Or will it be overwritten by creation 1000 times? No, the compiler will generate code to reuse the same thing each loop. In my opinion this version should perform a little better: That's actually slightly *less* efficient (in some situations), because it doesn't allow the compiler to reuse the memory for `x` after the loop has exited. But in both cases, the compiler will just set aside a little bit of local space (or just a CPU scratch register) and use it repeatedly. It is totally "free".
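The same holds for structs: a value-type local declared inside a loop just reuses one stack slot. A small sketch (not from the thread):

```
import std.stdio : writeln;

struct Point { int x, y; }

void main()
{
    foreach (i; 0 .. 3)
    {
        // "Created" each iteration, but the compiler just writes
        // into the same stack slot every time; no heap allocation
        // happens for a value type like this struct.
        Point p = Point(i, i * 2);
        writeln(p.x + p.y);
    }
}
```

Only when the type is heap-allocated (a class instance, or a struct holding GC-allocated arrays) does per-iteration construction start to cost anything worth reusing across iterations.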
Just another question about memory management in d from a newbie
Hello! First my background: C++ and Java ages ago. Since then only PL/SQL. Now learning D just for fun and personal education from time to time, and very pleased about it :-) Now I have to ask a question here, because I could not find a corresponding answer for it. Or I am unable to find it :-) I was wondering about the memory system in D (and other C-like languages) and the handling of the memory allocation overhead. Just an example that I have seen many times:

int main()
{
    foreach(i; 0 .. 1000)
    {
        int x;
        // do something with x
    }
    return 0;
}

Do I understand it right that the variable x will be created 1000 times and destroyed at the end of the scope in each loop? Or will it be overwritten by creation 1000 times? I mean, does it not cost the CPU some time to allocate (I know we're talking here of nsec), but work is work. As far as I know from my school days in assembler, allocation of memory is one of the most expensive instructions on the CPU. Activate memory block, find some free place, reserve, return pointer to caller, and so on... In my opinion this version should perform a little better:

int main()
{
    int x;
    foreach(i; 0 .. 1000)
    {
        x = 0; // reinitialize
        // do something with x
    }
    return 0;
}

Or do I miss something and there is an optimization by the compiler to avoid recreating/destroying the variable x 1000 times? I know that version 1 is safer because there will be no stale value after each loop that we could stumble onto, but that is not my question here. And I know that we're talking about really small CPU usage compared to what an app should do. But when it comes to performance, like in games or GUIs (and there are a lot of examples like version 1 out there), I have to ask myself whether it is not just a waste of CPU time. Or is it a styling-of-code thing? Thank you for your time! Greetings from Austria Thomas
Re: Memory management by interfacing C/C++
On Monday, 29 April 2019 at 14:38:54 UTC, 9il wrote: On Saturday, 27 April 2019 at 22:25:58 UTC, Ferhat Kurtulmuş wrote: [...] Hello Ferhat, You can use RCArray!T or Slice!(RCI!T) [1, 2] as common thread-safe @nogc types for D and C++ code. See also the C++ integration example [3] and C++ headers [4]. RCArray (fixed length) [1] http://mir-algorithm.libmir.org/mir_rc_array.html RCSlice (allows getting subslices) [2] http://mir-algorithm.libmir.org/mir_ndslice_allocation.html#rcslice C++ integration example [3] https://github.com/libmir/mir-algorithm/tree/master/cpp_example C++ headers [4] https://github.com/libmir/mir-algorithm/tree/master/include/mir An OpenCV D binding using ndslice as a substitute for cv::Mat would be useful, like opencv-python with numpy. However, I started opencvd about a month ago, and I am very new to D. For now, I am the only contributor, with zero mir.ndslice experience. When I gain more experience with ndslice, I will try it as a substitute for cv::Mat. Thank you for the links, 9il. I will take a look at them.
Re: Memory management by interfacing C/C++
On Saturday, 27 April 2019 at 22:25:58 UTC, Ferhat Kurtulmuş wrote: Hi, I am wrapping some C++ code for my personal project (opencvd), and I am creating many array pointers on the cpp side and keeping them in structs. I want to know whether I am leaking memory like crazy, although I am not facing crashes so far. Is D's GC handling things for me? Here is an example:
```
//declaration in d
struct IntVector {
    int* val;
    int length;
}

// in cpp
typedef struct IntVector {
    int* val;
    int length;
} IntVector;

// cpp function returning a struct containing an array pointer allocated with the "new" op.
IntVector Subdiv2D_GetLeadingEdgeList(Subdiv2D sd){
    std::vector<int> iv;
    sd->getLeadingEdgeList(iv);
    int *cintv = new int[iv.size()]; // I don't call delete anywhere?
    for(size_t i=0; i < iv.size(); i++){
        cintv[i] = iv[i];
    }
    IntVector ret = {cintv, (int)iv.size()};
    return ret;
};

// call extern c function in d:
extern (C) IntVector Subdiv2D_GetLeadingEdgeList(Subdiv2D sd);

int[] getLeadingEdgeList(){
    IntVector intv = Subdiv2D_GetLeadingEdgeList(this);
    int[] ret = intv.val[0..intv.length]; // just D magic. Still no delete anywhere!
    return ret;
}
```
The question is now: what will happen to "int *cintv", which is allocated with the new operator in the cpp code? I have a lot of similar code in the project, but I have not encountered any problems so far, even in looped video processing. Is D's GC doing deallocation automagically? https://github.com/aferust/opencvd Hello Ferhat, You can use RCArray!T or Slice!(RCI!T) [1, 2] as common thread-safe @nogc types for D and C++ code. See also the C++ integration example [3] and C++ headers [4]. RCArray (fixed length) [1] http://mir-algorithm.libmir.org/mir_rc_array.html RCSlice (allows getting subslices) [2] http://mir-algorithm.libmir.org/mir_ndslice_allocation.html#rcslice C++ integration example [3] https://github.com/libmir/mir-algorithm/tree/master/cpp_example C++ headers [4] https://github.com/libmir/mir-algorithm/tree/master/include/mir
Re: Memory management by interfacing C/C++
On Monday, 29 April 2019 at 00:53:34 UTC, Paul Backus wrote: On Sunday, 28 April 2019 at 23:10:24 UTC, Ferhat Kurtulmuş wrote: You are right. I am rewriting things using malloc, and will use core.stdc.stdlib.free on the D side. I am not sure if I can use core.stdc.stdlib.free to destroy arrays allocated with the new op. core.stdc.stdlib.free is (as the name suggests) the standard C `free` function. As such, it can only be used to free memory allocated by the standard C functions `malloc`, `calloc`, and `realloc`. This is the same in D as it is in C and C++. Thank you. It is now like:

/* c/cpp side */
extern (C) void deleteArr(void* arr);

void deleteArr(void* arr){
    delete[] arr;
}

struct IntVector Subdiv2D_GetLeadingEdgeList(Subdiv2D sd){
    std::vector<int> iv;
    sd->getLeadingEdgeList(iv);
    int *cintv = new int[iv.size()];
    for(size_t i=0; i < iv.size(); i++){
        cintv[i] = iv[i];
    }
    IntVector ret = {cintv, (int)iv.size()};
    return ret;
};
/* c/cpp side */

...
int[] getLeadingEdgeList(){ // d function
    IntVector intv = Subdiv2D_GetLeadingEdgeList(this);
    int[] ret = intv.val[0..intv.length].dup;
    deleteArr(intv.val);
    return ret;
}
...
Re: Memory management by interfacing C/C++
On Sunday, 28 April 2019 at 23:10:24 UTC, Ferhat Kurtulmuş wrote: You are right. I am rewriting things using malloc, and will use core.stdc.stdlib.free on the D side. I am not sure if I can use core.stdc.stdlib.free to destroy arrays allocated with the new op. core.stdc.stdlib.free is (as the name suggests) the standard C `free` function. As such, it can only be used to free memory allocated by the standard C functions `malloc`, `calloc`, and `realloc`. This is the same in D as it is in C and C++.
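A minimal sketch of the matching rule (the C-style helper name here is made up for illustration): whatever was allocated with C `malloc` may be freed from D with `core.stdc.stdlib.free`, because both sides call into the same C allocator.

```
import core.stdc.stdlib : malloc, free;

// Stand-in for a C library function that allocates with malloc.
// In the real project this would live on the C/C++ side.
extern (C) int* makeInts(size_t n)
{
    return cast(int*) malloc(n * int.sizeof);
}

void main()
{
    int* p = makeInts(10);
    scope (exit) free(p); // malloc/free are the same allocator family,
                          // so this pairing is valid across the boundary
    p[0] = 42;
}
```

The rule being illustrated: allocator and deallocator must come from the same family. C++ `new[]` must be matched by C++ `delete[]`, C `malloc` by C `free`, and D's GC `new` needs no manual free at all.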
Re: Memory management by interfacing C/C++
On Sunday, 28 April 2019 at 03:54:17 UTC, Paul Backus wrote: On Saturday, 27 April 2019 at 22:25:58 UTC, Ferhat Kurtulmuş wrote: Hi, I am wrapping some C++ code for my personal project (opencvd), and I am creating many array pointers on the cpp side and keeping them in structs. I want to know whether I am leaking memory like crazy, although I am not facing crashes so far. Is D's GC handling things for me? Here is an example: [...] The question is now: what will happen to "int *cintv", which is allocated with the new operator in the cpp code? I have a lot of similar code in the project, but I have not encountered any problems so far, even in looped video processing. Is D's GC doing deallocation automagically? https://github.com/aferust/opencvd D's GC only collects memory allocated with D's `new` operator. Memory allocated by C++'s `new` operator must be freed by C++'s `delete` operator. You are right. I am rewriting things using malloc, and will use core.stdc.stdlib.free on the D side. I am not sure if I can use core.stdc.stdlib.free to destroy arrays allocated with the new op.
Re: Memory management by interfacing C/C++
On Saturday, 27 April 2019 at 22:25:58 UTC, Ferhat Kurtulmuş wrote: Hi, I am wrapping some C++ code for my personal project (opencvd), and I am creating many array pointers on the cpp side and keeping them in structs. I want to know whether I am leaking memory like crazy, although I am not facing crashes so far. Is D's GC handling things for me? Here is an example: [...] The question is now: what will happen to "int *cintv", which is allocated with the new operator in the cpp code? I have a lot of similar code in the project, but I have not encountered any problems so far, even in looped video processing. Is D's GC doing deallocation automagically? https://github.com/aferust/opencvd D's GC only collects memory allocated with D's `new` operator. Memory allocated by C++'s `new` operator must be freed by C++'s `delete` operator.
Memory management by interfacing C/C++
Hi, I am wrapping some C++ code for my personal project (opencvd), and I am creating many array pointers on the cpp side and keeping them in structs. I want to know whether I am leaking memory like crazy, although I am not facing crashes so far. Is D's GC handling things for me? Here is an example:
```
//declaration in d
struct IntVector {
    int* val;
    int length;
}

// in cpp
typedef struct IntVector {
    int* val;
    int length;
} IntVector;

// cpp function returning a struct containing an array pointer allocated with the "new" op.
IntVector Subdiv2D_GetLeadingEdgeList(Subdiv2D sd){
    std::vector<int> iv;
    sd->getLeadingEdgeList(iv);
    int *cintv = new int[iv.size()]; // I don't call delete anywhere?
    for(size_t i=0; i < iv.size(); i++){
        cintv[i] = iv[i];
    }
    IntVector ret = {cintv, (int)iv.size()};
    return ret;
};

// call extern c function in d:
extern (C) IntVector Subdiv2D_GetLeadingEdgeList(Subdiv2D sd);

int[] getLeadingEdgeList(){
    IntVector intv = Subdiv2D_GetLeadingEdgeList(this);
    int[] ret = intv.val[0..intv.length]; // just D magic. Still no delete anywhere!
    return ret;
}
```
The question is now: what will happen to "int *cintv", which is allocated with the new operator in the cpp code? I have a lot of similar code in the project, but I have not encountered any problems so far, even in looped video processing. Is D's GC doing deallocation automagically? https://github.com/aferust/opencvd
Re: Block statements and memory management
On Saturday, 16 March 2019 at 15:53:26 UTC, Johan Engelen wrote: On Saturday, 16 March 2019 at 03:47:43 UTC, Murilo wrote: Does anyone know if when I create a variable inside a scope as in {int a = 10;} it disappears completely from memory when the scope finishes? Or does it remain in some part of the memory? I am thinking of using scopes to make optimized programs that consume less memory. Others have made good points in this thread, but what is missing is that indeed scopes _can_ be used beneficially to reduce memory footprint. -Johan I would like to thank everyone for your help; that information was very helpful.
Re: Block statements and memory management
On Saturday, 16 March 2019 at 03:47:43 UTC, Murilo wrote: Does anyone know if when I create a variable inside a scope as in {int a = 10;} it disappears completely from memory when the scope finishes? Or does it remain in some part of the memory? I am thinking of using scopes to make optimized programs that consume less memory. Others have made good points in this thread, but what is missing is that indeed scopes _can_ be used beneficially to reduce memory footprint. I recommend playing with this code on d.godbolt.org:
```
void func(ref int[10] a); // important detail: passed by reference (pointer)

void foo()
{
    {
        int[10] a;
        func(a);
    }
    {
        int[10] b;
        func(b);
    }
}
```
Because the variable is passed by reference (pointer), the optimizer cannot merge the storage space of `a` and `b` _unless_ scope information is taken into account. Without taking scope into account, the first `func` call could store the pointer to `a` somewhere for later use in the second `func` call, for example. However, because of scope, using `a` after its scope has ended is UB, and thus variables `a` and `b` can be merged. GDC uses scope information for variable lifetime optimization, but LDC and DMD both do not. For anyone interested in working on compilers: adding variable scope lifetime information to LDC (not impossibly hard) would be a nice project and very valuable. -Johan
Re: Block statements and memory management
On Sat, Mar 16, 2019 at 01:21:02PM +0100, spir via Digitalmars-d-learn wrote: > On 16/03/2019 11:19, Dennis via Digitalmars-d-learn wrote: [...] > > In any case, for better memory efficiency I'd consider looking at > > reducing dynamic allocations such as new or malloc. Memory on the > > stack is basically free compared to that, so even if adding lots of > > braces to your code reduces stack memory, chances are it's a low > > leverage point. > > > > [1] https://en.wikipedia.org/wiki/Live_variable_analysis > > [2] https://en.wikipedia.org/wiki/Data-flow_analysis > > Just to add a bit to what has been said: > * Register allocation (see Wikipedia) is a well-researched area. > * By coding that way, you force the compiler to optimise *a certain way*, > which may prevent it from performing other, more relevant optimisations. > * You cannot beat the knowledge in that domain, it is simply too big > and complex, just be confident. [...] And to add even more to that: before embarking on micro-optimizations of this sort, always check with a profiler whether or not the bottleneck is even in that part of the code. Often I find myself very surprised at where the real bottleneck is, which is often nowhere near where I thought it would be. Also, check with a memory profiler to find out where the real heavy memory usage points are. It may not be where you thought it was. Generally speaking, in this day and age of highly optimizing compilers, premature optimization is the root of all evil, because it uglifies your code and makes it hard to maintain for little or no gain, and sometimes for *negative* gain, because by writing code in an unusual way, you confuse the optimizer as to your real intent, thereby reducing its effectiveness at producing optimized code. Don't optimize until you have verified with a profiler where your bottlenecks are. It takes a lot of time and effort to write code this way, so make it count by applying it where it actually matters.
Of course, this assumes you use a compiler with a powerful-enough optimizer. I recommend ldc/gdc if performance is important to you. Dmd compiles somewhat faster, but at the cost of poorer codegen. T -- The early bird gets the worm. Moral: ewww...
Re: Block statements and memory management
On 16/03/2019 11:19, Dennis via Digitalmars-d-learn wrote: On Saturday, 16 March 2019 at 03:47:43 UTC, Murilo wrote: Does anyone know if when I create a variable inside a scope as in {int a = 10;} it disappears completely from memory when the scope finishes? Or does it remain in some part of the memory? I am thinking of using scopes to make optimized programs that consume less memory. In general, you want variables to have no larger scope than needed, so in large functions reducing the scope may be useful. When it comes to efficiency however, doing that is neither necessary nor sufficient for the compiler to re-use registers / stack space. I looked at the assembly output of DMD for this:
```
void func(int a);

void main()
{
    {
        int a = 2;
        func(a);
    }
    {
        int b = 3;
        func(b);
    }
}
```
Without optimizations (the -O flag), it stores a and b in different places on the stack. With optimizations, the values of a and b (2 and 3) are simply loaded into the EDI register before the call. Removing the braces doesn't change anything about that. The compiler does live variable analysis [1] as well as data-flow analysis [2] to figure out that it only needs to load the values 2 and 3 just before the function call. This is just a trivial example, but the same applies to larger functions. In any case, for better memory efficiency I'd consider looking at reducing dynamic allocations such as new or malloc. Memory on the stack is basically free compared to that, so even if adding lots of braces to your code reduces stack memory, chances are it's a low-leverage point. [1] https://en.wikipedia.org/wiki/Live_variable_analysis [2] https://en.wikipedia.org/wiki/Data-flow_analysis Just to add a bit to what has been said: * Register allocation (see Wikipedia) is a well-researched area. * By coding that way, you force the compiler to optimise *a certain way*, which may prevent it from performing other, more relevant optimisations.
* You cannot beat the knowledge in that domain, it is simply too big and complex, just be confident. diniz
Re: Block statements and memory management
On Saturday, 16 March 2019 at 03:47:43 UTC, Murilo wrote: Does anyone know if when I create a variable inside a scope as in {int a = 10;} it disappears completely from memory when the scope finishes? Or does it remain in some part of the memory? I am thinking of using scopes to make optimized programs that consume less memory. In general, you want variables to have no larger scope than needed, so in large functions reducing the scope may be useful. When it comes to efficiency however, doing that is neither necessary nor sufficient for the compiler to re-use registers / stack space. I looked at the assembly output of DMD for this:
```
void func(int a);

void main()
{
    {
        int a = 2;
        func(a);
    }
    {
        int b = 3;
        func(b);
    }
}
```
Without optimizations (the -O flag), it stores a and b in different places on the stack. With optimizations, the values of a and b (2 and 3) are simply loaded into the EDI register before the call. Removing the braces doesn't change anything about that. The compiler does live variable analysis [1] as well as data-flow analysis [2] to figure out that it only needs to load the values 2 and 3 just before the function call. This is just a trivial example, but the same applies to larger functions. In any case, for better memory efficiency I'd consider looking at reducing dynamic allocations such as new or malloc. Memory on the stack is basically free compared to that, so even if adding lots of braces to your code reduces stack memory, chances are it's a low-leverage point. [1] https://en.wikipedia.org/wiki/Live_variable_analysis [2] https://en.wikipedia.org/wiki/Data-flow_analysis
Re: Block statements and memory management
On Saturday, 16 March 2019 at 03:47:43 UTC, Murilo wrote: Does anyone know if when I create a variable inside a scope as in {int a = 10;} it disappears completely from memory when the scope finishes? Or does it remain in some part of the memory? I am thinking of using scopes to make optimized programs that consume less memory. I'd recommend against these sorts of micro-optimizations. Compilers are very good at doing this kind of thing automatically, so you don't have to worry about it and can concentrate on the actual logic of your program.
Re: Block statements and memory management
On Saturday, 16 March 2019 at 03:47:43 UTC, Murilo wrote: Does anyone know if when I create a variable inside a scope as in {int a = 10;} it disappears completely from memory when the scope finishes? Or does it remain in some part of the memory? I am thinking of using scopes to make optimized programs that consume less memory. It depends on how the compiler translates and optimizes the code. An integer variable like `a` in your example might never exist in memory at all, if the compiler can allocate a register for it until it goes out of scope. The easiest way to find out is to look at a disassembly of the compiled code.
Block statements and memory management
Does anyone know if when I create a variable inside a scope as in {int a = 10;} it disappears completely from memory when the scope finishes? Or does it remain in some part of the memory? I am thinking of using scopes to make optimized programs that consume less memory.
Re: Region-based memory management and GC?
On Saturday, 30 September 2017 at 07:41:21 UTC, Igor wrote: On Friday, 29 September 2017 at 22:13:01 UTC, Jon Degenhardt wrote: Have there been any investigations into using region-based memory management (aka memory arenas) in D, possibly in conjunction with GC-allocated memory? Sounds like you just want to use https://dlang.org/phobos/std_experimental_allocator_building_blocks_region.html. Wow, thanks, I did not know about this. Will check it out.
Re: Region-based memory management and GC?
On Friday, 29 September 2017 at 22:13:01 UTC, Jon Degenhardt wrote: Have there been any investigations into using region-based memory management (aka memory arenas) in D, possibly in conjunction with GC-allocated memory? This would be a very speculative idea, but it'd be interesting to know if there have been looks at this area. My own interest is request-response applications, where memory allocated as part of a specific request can be discarded as a single block when the processing of that request completes, without running destructors. I've also seen some papers describing GC systems targeting big-data platforms that incorporate this idea, e.g. http://www.ics.uci.edu/~khanhtn1/papers/osdi16.pdf --Jon Sounds like you just want to use https://dlang.org/phobos/std_experimental_allocator_building_blocks_region.html.
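A sketch of how the linked Region building block maps onto the request-response use case described above (the size and names are illustrative, and this assumes the std.experimental.allocator API as documented):

```
import std.experimental.allocator : makeArray;
import std.experimental.allocator.building_blocks.region : Region;
import std.experimental.allocator.mallocator : Mallocator;

void handleRequest()
{
    // One arena per request: a 1 MiB block obtained from malloc,
    // carved up by cheap pointer-bump allocations.
    auto arena = Region!Mallocator(1024 * 1024);
    int[] scratch = arena.makeArray!int(256);
    // ... use scratch (and other arena allocations) while serving
    // the request ...
} // the whole block is returned to malloc in one shot here;
  // no per-object destructors run, matching the use case above
```

This is exactly the discard-as-a-single-block behavior the original post asks about; the trade-off is that nothing allocated from the arena may outlive it.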
Region-based memory management and GC?
Have there been any investigations into using region-based memory management (aka memory arenas) in D, possibly in conjunction with GC allocated memory? This would be a very speculative idea, but it'd be interesting to know if there have been looks at this area. My own interest is request-response applications, where memory allocated as part of a specific request can be discarded as a single block when the processing of that request completes, without running destructors. I've also seen some papers describing GC systems targeting big data platforms that incorporate this idea. eg. http://www.ics.uci.edu/~khanhtn1/papers/osdi16.pdf --Jon
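The request-response pattern described above maps fairly directly onto Phobos' Region building block. A minimal sketch (the function name and arena size are made up for illustration):

```d
import std.experimental.allocator : makeArray;
import std.experimental.allocator.building_blocks.region : Region;
import std.experimental.allocator.mallocator : Mallocator;

// hypothetical request handler: all per-request memory comes from one region
void handleRequest()
{
    auto region = Region!Mallocator(1024 * 1024); // 1 MiB arena from malloc
    auto buf = region.makeArray!int(100);         // carved out of the arena
    buf[0] = 1;
    assert(buf.length == 100);
}   // region's destructor returns the whole block at once, no per-object frees

void main() { handleRequest(); }
```

Note this matches the "discarded as a single block, without running destructors" requirement: the region does not track individual allocations, so object destructors are not run.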
Re: Best memory management D idioms
On Wednesday, 8 March 2017 at 06:42:40 UTC, ag0aep6g wrote: [...] Yes and yes. GCAllocator.allocate calls core.memory.GC.malloc, which does pretty much the same thing as the builtin `new`. Nitpicking: `new` is typed (i.e. allocation+construction), `malloc` and `allocate` are not (only allocation). If you want allocation *and* construction with the new Allocator interface, you'll want to use the make[1] (and dispose[2] for the reverse path) template function; and they are a superset of `new`: You cannot, e.g., construct a delegate with new, but you can with `make`. [1] https://dlang.org/phobos/std_experimental_allocator.html#.make [2] https://dlang.org/phobos/std_experimental_allocator.html#.dispose
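A minimal sketch of the make/dispose pairing described above (the `Point` class is hypothetical, made up for illustration):

```d
import std.experimental.allocator : theAllocator, make, dispose;

// hypothetical class for illustration
class Point
{
    int x, y;
    this(int x, int y) { this.x = x; this.y = y; }
}

void main()
{
    auto p = theAllocator.make!Point(3, 4);  // allocation + construction
    scope(exit) theAllocator.dispose(p);     // destruction + deallocation
    assert(p.x == 3 && p.y == 4);
}
```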
Re: Best memory management D idioms
On 03/08/2017 02:15 AM, XavierAP wrote: I see the default allocator is the same GC heap used by 'new'. Just for my learning curiosity, does this mean that if I theAllocator.make() something and then forget to dispose() it, it will be garbage collected the same once no longer referenced? And so are these objects traversed by the GC? Yes and yes. GCAllocator.allocate calls core.memory.GC.malloc, which does pretty much the same thing as the builtin `new`. One difference might be with preciseness. `new` can take the type into account and automatically mark the allocation as NO_SCAN. I'm not sure if GCAllocator can do that. Maybe if you use `make`/`makeArray` to make the allocations. I've also looked at mallocator, [2] can it be used in some way to provide an allocator instead of the default theAllocator? As far as I can tell mallocator is not enough to implement an IAllocator, is there a reason, or where's the rest, am I missing it? To make an IAllocator, use std.experimental.allocator.allocatorObject. The example in the documentation shows how it's done. https://dlang.org/phobos/std_experimental_allocator.html#.allocatorObject
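Roughly, following the pattern from the allocatorObject documentation (the exact interface type returned has changed across Phobos versions, hence `auto`):

```d
import std.experimental.allocator : allocatorObject, make, dispose;
import std.experimental.allocator.mallocator : Mallocator;

void main()
{
    // wrap the struct-based Mallocator in a class-based allocator object
    auto alloc = allocatorObject(Mallocator.instance);
    auto p = alloc.make!int(42);     // served by malloc, not the GC
    scope(exit) alloc.dispose(p);
    assert(*p == 42);
}
```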
Re: Best memory management D idioms
On Tuesday, 7 March 2017 at 18:21:43 UTC, Eugene Wissner wrote: To avoid this from the beginning, it may be better to use allocators. You can use "make" and "dispose" from std.experimental.allocator the same way as New/Delete. OK I've been reading on std.experimental.allocator; it looks really powerful and general, more than I need. I see the potential but I don't really have the knowledge to tweak memory management, and the details of the "building blocks" are well beyond me. But even if I don't go there, I guess it's a good thing that I can change my program's allocator by changing one single line or version assigning theAllocator, and benchmark the results among different possibilities. I see the default allocator is the same GC heap used by 'new'. Just for my learning curiosity, does this mean that if I theAllocator.make() something and then forget to dispose() it, it will be garbage collected the same once no longer referenced? And so are these objects traversed by the GC? I've also looked at mallocator, [2] can it be used in some way to provide an allocator instead of the default theAllocator? As far as I can tell mallocator is not enough to implement an IAllocator, is there a reason, or where's the rest, am I missing it? [1] https://dlang.org/phobos/std_experimental_allocator.html [2] https://dlang.org/phobos/std_experimental_allocator_mallocator.html
Re: Best memory management D idioms
On Tuesday, 7 March 2017 at 20:15:37 UTC, XavierAP wrote: On Tuesday, 7 March 2017 at 18:21:43 UTC, Eugene Wissner wrote: To avoid this from the beginning, it may be better to use allocators. You can use "make" and "dispose" from std.experimental.allocator the same way as New/Delete. Thanks! looking into it. Does std.experimental.allocator have a leak debugging tool like dlib's printMemoryLog()? Yes, but printMemoryLog is only useful for simple searches for memory leaks. For advanced debugging it is better to learn some memory debugger or profiler.
Re: Best memory management D idioms
On Tuesday, 7 March 2017 at 18:21:43 UTC, Eugene Wissner wrote: To avoid this from the beginning, it may be better to use allocators. You can use "make" and "dispose" from std.experimental.allocator the same way as New/Delete. Thanks! looking into it. Does std.experimental.allocator have a leak debugging tool like dlib's printMemoryLog()?
Re: Best memory management D idioms
On Tuesday, 7 March 2017 at 17:37:43 UTC, XavierAP wrote: On Tuesday, 7 March 2017 at 16:51:23 UTC, Kagamin wrote: There's nothing like that of C++. Don't you think New/Delete from dlib.core.memory fills the bill? for C++ style manual dynamic memory management? It looks quite nice to me, being no more than a simple malloc wrapper with constructor/destructor calling and type safety. Plus printMemoryLog() for debugging, much easier than valgrind. do you want to manage non-memory resources with these memory management mechanisms too? I wasn't thinking about this now, but I'm sure the need will come up. Yes. For simple memory management New/Delete would be enough. But you depend on your libc in this case, which is mostly not a problem. From experience it wasn't enough for some code bases, so the C world invented some workarounds: 1) Link to another libc providing different malloc/free implementations 2) Use macros that default to the libc's malloc/free, but can be set at compile time to an alternative implementation (mbedtls uses for example mbedtls_malloc, mbedtls_calloc and mbedtls_free macros) To avoid this from the beginning, it may be better to use allocators. You can use "make" and "dispose" from std.experimental.allocator the same way as New/Delete. I tried to introduce the allocators in dlib but it failed, because dlib is difficult to modify due to other projects based on it (although to be honest it was mostly a communication problem, as it often happens), so I started a similar lib from scratch.
Re: Best memory management D idioms
On Tuesday, 7 March 2017 at 16:51:23 UTC, Kagamin wrote: There's nothing like that of C++. Don't you think New/Delete from dlib.core.memory fills the bill? for C++ style manual dynamic memory management? It looks quite nice to me, being no more than a simple malloc wrapper with constructor/destructor calling and type safety. Plus printMemoryLog() for debugging, much easier than valgrind. do you want to manage non-memory resources with these memory management mechanisms too? I wasn't thinking about this now, but I'm sure the need will come up.
Re: Best memory management D idioms
On Sunday, 5 March 2017 at 20:54:06 UTC, XavierAP wrote: What I want to learn (not debate) is the currently available types, idioms etc. whenever one wants deterministic memory management. There's nothing like that of C++. Currently you have Unique, RefCounted, scoped and individual people efforts on this. BTW, do you want to manage non-memory resources with these memory management mechanisms too?
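Of the options listed, RefCounted is perhaps the easiest to show in a few lines. A sketch of deterministic shared ownership (the `Payload` struct is hypothetical):

```d
import std.typecons : RefCounted;

struct Payload { int value; }   // hypothetical payload for illustration

void main()
{
    auto rc = RefCounted!Payload(41);
    {
        auto rc2 = rc;          // refcount goes to 2; both refer to one payload
        rc2.value += 1;
    }                           // rc2 dies, count drops back to 1
    assert(rc.value == 42);
}   // last reference gone: the payload is destroyed deterministically here
```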
Re: Best memory management D idioms
On Monday, 6 March 2017 at 08:26:53 UTC, Eugene Wissner wrote: The memory management in D is becoming a mess. I am aware this is a hot topic, hence the opening joke. But I just want to learn what toolboxes are currently available, not really discuss how adequate they are, or how often GC is adequate enough. Neither how much % GC-haram Phobos or other libraries are internally atm. There are several threads already discussing this, which I've browsed. My impression so far is that dlib's New/Delete is the most straightforward or efficient tool, and it can kind of cover all the bases (and is GC-halal), as far as I can tell in advance. Plus it has printMemoryLog() as a bonus, which is already better than C++ new/delete. Just an observation that it doesn't provide an option to allocate an uninitialized array, considering that this module is already targeting performance applications. Not sure if I would use any other tool (besides GC itself). I'm curious about Unique but it's not fully clear to me what happens (regarding lifetime, deterministic destruction, GC monitoring) when you need to call "new" to use it.
Re: Best memory management D idioms
On Sunday, 5 March 2017 at 20:54:06 UTC, XavierAP wrote: I was going to name this thread "SEX!!" but then I thought "best memory management" would get me more reads ;) Anyway now that I have your attention... What I want to learn (not debate) is the currently available types, idioms etc. whenever one wants deterministic memory management. Please do not derail it debating how often deterministic should be preferred to GC or not. Just, whenever one should happen to require it, what are the available means? And how do they compare in your daily use, if any? If you want to post your code samples using special types for manual memory management, that would be great. AFAIK (from here on please correct me wherever I'm wrong) the original D design was, if you don't want to use GC, then malloc() and free() are available from std.c. Pretty solid. I guess the downside is less nice syntax than new/delete, and having to check the returned value instead of exceptions. I guess these were the original reasons why C++ introduced new/delete but I've never been sure. Then from this nice summary [1] I've learned about the existence of new libraries and Phobos modules: std.typecons, Dlib, and std.experimental.allocator. Sadly in this department D starts to look a bit like C++ in that there are too many possible ways to do one certain thing, and what's worse none of them is the "standard" way, and none of them is deprecated atm either. I've just taken a quick look at them, and I was wondering how many people prefer either, and what are their reasons and their experience. dlib.core.memory and dlib.memory lack documentation, but according to this wiki page [2] I found, dlib defines New/Delete substitutes without GC a-la-C++, with the nice addition of a "memoryDebug" version (how ironclad is this to debug every memory leak?) From std.typecons what caught my eye first is scoped() and Unique. 
std.experimental.allocator sounded quite, well, experimental or advanced, so I stopped reading before trying to wrap my head around all of it. Should I take another look? scoped() seems to work nicely for auto variables, and if I understood it right, not only does it provide deterministic management, it also allocates on the stack, so it is like C++ without pointers right? Looking into the implementation, I just hope most of that horribly unsafe casting can be taken care of at compile time. The whole thing looks a bit obscure under the hood and in its usage: auto is mandatory or else allocation doesn't hold, but even reflection cannot tell the difference at runtime between T and typeof(scoped!T) //eew. Unfortunately this also makes scoped() extremely unwieldy for member variables: their type has to be explicitly declared as typeof(scoped!T), and they cannot be initialized at the declaration. To me this looks like scoped() could be useful in some cases but it looks hardly recommendable to the same extent as the analogous C++ idiom. Then Unique seems to be analogous to C++ unique_ptr, fair enough... Or are there significant differences? Your experience? And am I right in assuming that scoped() and Unique (and dlib.core.memory) prevent the GC from monitoring the memory they manage (just like malloc?), thus also saving those few cycles? This I haven't seen clearly stated in the documentation. [1] http://forum.dlang.org/post/stohzfatiwjzemqoj...@forum.dlang.org [2] https://github.com/gecko0307/dlib/wiki/Manual-Memory-Management The memory management in D is becoming a mess. Yes, D was developed with the GC in mind and the attempts to make it usable without GC came later. Now std has functions that allocate with GC, there are containers that use malloc/free directly or reference counting for the internal storage, and there is std.experimental.allocator. And it doesn't really get better. There is also some effort to add reference counting directly into the language. 
I really fear we will soon have signatures like "void myfunc() @safe @nogc @norc..". Stuff like RefCounted or Unique are similar to C++ analogues, but not the same. They throw exceptions allocated with the GC; factory methods (like Unique.create) use the GC to create the object. Also dlib's memory management is a nightmare: some stuff uses "new" and the GC, some "New" and "Delete". Some functions allocate memory and return it, and you never know if it will be collected or you should free it; you have to look into the source code each time to see what the function does internally, otherwise you will end up with memory leaks or segmentation faults. dlib has a lot of outdated code that isn't easy to update.
Best memory management D idioms
I was going to name this thread "SEX!!" but then I thought "best memory management" would get me more reads ;) Anyway now that I have your attention... What I want to learn (not debate) is the currently available types, idioms etc. whenever one wants deterministic memory management. Please do not derail it debating how often deterministic should be preferred to GC or not. Just, whenever one should happen to require it, what are the available means? And how do they compare in your daily use, if any? If you want to post your code samples using special types for manual memory management, that would be great. AFAIK (from here on please correct me wherever I'm wrong) the original D design was, if you don't want to use GC, then malloc() and free() are available from std.c. Pretty solid. I guess the downside is less nice syntax than new/delete, and having to check the returned value instead of exceptions. I guess these were the original reasons why C++ introduced new/delete but I've never been sure. Then from this nice summary [1] I've learned about the existence of new libraries and Phobos modules: std.typecons, Dlib, and std.experimental.allocator. Sadly in this department D starts to look a bit like C++ in that there are too many possible ways to do one certain thing, and what's worse none of them is the "standard" way, and none of them is deprecated atm either. I've just taken a quick look at them, and I was wondering how many people prefer either, and what are their reasons and their experience. dlib.core.memory and dlib.memory lack documentation, but according to this wiki page [2] I found, dlib defines New/Delete substitutes without GC a-la-C++, with the nice addition of a "memoryDebug" version (how ironclad is this to debug every memory leak?) From std.typecons what caught my eye first is scoped() and Unique. std.experimental.allocator sounded quite, well, experimental or advanced, so I stopped reading before trying to wrap my head around all of it. 
Should I take another look? scoped() seems to work nicely for auto variables, and if I understood it right, not only does it provide deterministic management, it also allocates on the stack, so it is like C++ without pointers right? Looking into the implementation, I just hope most of that horribly unsafe casting can be taken care of at compile time. The whole thing looks a bit obscure under the hood and in its usage: auto is mandatory or else allocation doesn't hold, but even reflection cannot tell the difference at runtime between T and typeof(scoped!T) //eew. Unfortunately this also makes scoped() extremely unwieldy for member variables: their type has to be explicitly declared as typeof(scoped!T), and they cannot be initialized at the declaration. To me this looks like scoped() could be useful in some cases but it looks hardly recommendable to the same extent as the analogous C++ idiom. Then Unique seems to be analogous to C++ unique_ptr, fair enough... Or are there significant differences? Your experience? And am I right in assuming that scoped() and Unique (and dlib.core.memory) prevent the GC from monitoring the memory they manage (just like malloc?), thus also saving those few cycles? This I haven't seen clearly stated in the documentation. [1] http://forum.dlang.org/post/stohzfatiwjzemqoj...@forum.dlang.org [2] https://github.com/gecko0307/dlib/wiki/Manual-Memory-Management
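For reference, a runnable sketch of the scoped() local-variable case discussed above (the `Widget` class is made up for illustration):

```d
import std.typecons : scoped;

class Widget   // hypothetical class
{
    int id;
    this(int id) { this.id = id; }
}

void main()
{
    auto w = scoped!Widget(7);  // instance lives on the stack, not the GC heap
    assert(w.id == 7);          // usable like a normal reference via alias this
}   // destructor runs deterministically at end of scope
```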
Re: Classes and Structs, Memory management questions
On Friday, 2 September 2016 at 08:43:45 UTC, dom wrote: Since garbage collection is a very nice feature that I wouldn't wanna miss for certain scenarios I think D should give us the opportunity to determine how an object is allocated. In the example above putting it on the stack is probably a good idea. Having a self managed reference to the heap can be good too if manual memory management is wanted. Or of course let the GC manage it ( i love it for prototyping code and also as a D beginner it is beneficial that i just dont need to care about memory management). Could somebody explain to me if this is seen as a problem why/whynot and how I should address that kind of issues in my code? You can allocate a class instance on the stack: https://dlang.org/phobos/std_typecons.html#.scoped
Re: Classes and Structs, Memory management questions
On Friday, 2 September 2016 at 08:43:45 UTC, dom wrote: from what i got Classes are always reference types and structs are value types in D. i am wondering why that is. for me the main difference between classes and structs is not how they are allocated, but that one has inheritance, and the other hasn't. It depends on the language you're using. In C++, for example, you can inherit both! The only difference AFAIK is that C++ structs have public inheritance by default vs private for classes. Andrea
Re: Classes and Structs, Memory management questions
On Friday, 2 September 2016 at 08:43:45 UTC, dom wrote: from what i got Classes are always reference types and structs are value types in D. i am wondering why that is. for me the main difference between classes and structs is not how they are allocated, but that one has inheritance, and the other hasn't. Supporting inheritance has some overhead, so at least the split between classes and structs makes sense to me. How instances in an inheritance tree are allocated is actually an important consideration, particularly when it comes to object slicing. In C++, this can be a problem:
```
class Foo {};
class Bar : public Foo {};

Bar bar;
Foo foo = bar;  // slicing: the Bar-specific part is lost
```
All of the information about the type Bar is lost in the assignment to foo. The same thing is going to happen when passing bar to a function that takes a Foo by value as a parameter. The only way to avoid the problem is to pass by reference (or pointer). In Modern C++, with move semantics being a thing, passing by value is much more common than it used to be, but this is the sort of thing it's easy either not to know or to forget about. In D, you don't have to worry about it. I read somewhere (in old forum discussions or an old article) that object slicing is one of the motivations behind the distinction in D.
Re: Classes and Structs, Memory management questions
On Friday, 2 September 2016 at 08:59:38 UTC, Andrea Fontana wrote: On Friday, 2 September 2016 at 08:54:33 UTC, dom wrote: i haven't read it fully yet, but i think this DIP contains some or all of my concerns https://github.com/dlang/DIPs/blob/master/DIPs/DIP1000.md Check this: https://dlang.org/phobos/std_experimental_allocator.html thx that is very interesting. it seems like that can cover very complex allocation schemes with a general interface! i have also found this to allocate a class on the stack http://dlang.org/phobos/std_typecons.html#.scoped class A { ... } auto instance = scoped!A(); that has even quite a nice syntax!
Re: Classes and Structs, Memory management questions
On Friday, 2 September 2016 at 08:54:33 UTC, dom wrote: i haven't read it fully yet, but i think this DIP contains some or all of my concerns https://github.com/dlang/DIPs/blob/master/DIPs/DIP1000.md Check this: https://dlang.org/phobos/std_experimental_allocator.html
Re: Classes and Structs, Memory management questions
i haven't read it fully yet, but i think this DIP contains some or all of my concerns https://github.com/dlang/DIPs/blob/master/DIPs/DIP1000.md
Classes and Structs, Memory management questions
from what i got Classes are always reference types and structs are value types in D. i am wondering why that is. for me the main difference between classes and structs is not how they are allocated, but that one has inheritance, and the other hasn't. Supporting inheritance has some overhead, so at least the split between classes and structs makes sense to me. I like garbage collection, but in many cases it's just unnecessary (like in the example below) or hurts performance on a bigger scale.

{
    FileReader reader = new FileReader(); // Annoy the garbage collector for no reason?
    auto blob = reader.read();
    delete reader;
}

Since garbage collection is a very nice feature that I wouldn't wanna miss for certain scenarios I think D should give us the opportunity to determine how an object is allocated. In the example above putting it on the stack is probably a good idea. Having a self managed reference to the heap can be good too if manual memory management is wanted. Or of course let the GC manage it ( i love it for prototyping code and also as a D beginner it is beneficial that i just dont need to care about memory management). Could somebody explain to me if this is seen as a problem why/whynot and how I should address that kind of issues in my code?
Re: Preferred method of creating objects, structs, and arrays with deterministic memory management
On Wednesday, 1 June 2016 at 08:53:01 UTC, Rene Zwanenburg wrote: I was wondering: what's the preferred method for deterministic memory management? You can annotate your functions as @nogc. The compiler will disallow any potential GC use, including calling other functions that are not @nogc. P0nce is doing real time audio stuff, iirc he's using a thread with a @nogc entry point for the strictly real-time parts, and regular GC-using code in another thread for everything else. Yes, GC is a minor concern for me. I'm happy to use the GC in UI stuff next to the @nogc part. As tired as the expression is, it's a "best of both worlds" situation. I tried GC.disable(), it was bringing lots of stability problems because not all my code was @nogc and rogue allocations occurred. I guess I deserved it? I'm a heavy user of the GC-proof resource class "idiom" which turns the GC into a leak detector: https://p0nce.github.io/d-idioms/#GC-proof-resource-class That way you can make polymorphic resources with relative safety. Exception safety is harder though for class objects. Usually struct should be used. I'd say D has a more complex story for resources, but it's not necessarily a bad story. When writing a writeln("hello world"); I don't really want to think about who owns the string "hello world". That's what affine types are about, one size fits all.
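A small sketch of the @nogc annotation mentioned above (the `sum` function is made up; the point is that the attribute is checked transitively at compile time):

```d
// @nogc is enforced transitively: this function cannot call anything
// that might allocate with the GC, or it fails to compile
@nogc nothrow pure int sum(const(int)[] xs)
{
    int total;
    foreach (x; xs) total += x;
    return total;
}

void main()
{
    static immutable data = [1, 2, 3];
    assert(sum(data) == 6);   // fine to call @nogc code from GC-using code
}
```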
Re: Preferred method of creating objects, structs, and arrays with deterministic memory management
I was wondering: what's the preferred method for deterministic memory management? You may be interested in RefCounted. It only works for structs, not classes, but it's still useful. - Classes/Structs have constructors and destructors. I am unconfident with my knowledge as to how this works with malloc and free. malloc() and free() operate on a lower level; they only care about raw memory. When you malloc() some space, you can construct an object there using emplace(), and it can be destructed by using destroy(). - Many features and types use the GC such as exceptions, the new keyword, and all arrays except statics. It's important to differentiate static arrays and slices with static storage. For example:

class C {
    static int[] someSlice; // This is a slice with static storage. The memory it is referring to may be GC allocated, but it doesn't have to be (this is true for all slices btw).
    int[4] someStaticArray; // This is a static array, i.e. an array with a fixed length. In D static arrays are value types, so it's allocated in its enclosing scope and copied when you pass it around.
}

- std.container.array can be used for deterministic arrays, but has the issue of dangling pointers. There is probably a good solution to this, but I don't know it. I don't know of any dangling pointer issues with Array. It's possible to create cycles resulting in a leak, but that's only likely to happen if you heavily rely on refcounting everything. - There seems to be no way to completely turn off the GC. That is, never have the runtime allocate the memory used by the GC. Like Rikki said, if you really want to, the GC can be replaced with an asserting stub. This isn't as hard as it sounds, just add something like the file he linked to to your project. Since all declarations are extern(C) there is no name mangling, and the linker will prefer your own definitions over the ones in druntime. I don't recommend you do this though unless you really know what you're doing. 
These are the pieces I've gathered, but I don't really know what's true or how to use this knowledge. Some ideas I've gleaned may also be outdated. Does anyone know the "correct" way one would go about a non-GC program, and if a program can be run without ever instantiating a GC? Has there been any progress on reducing the std library's usage of the GC? You can annotate your functions as @nogc. The compiler will disallow any potential GC use, including calling other functions that are not @nogc. P0nce is doing real time audio stuff, iirc he's using a thread with a @nogc entry point for the strictly real-time parts, and regular GC-using code in another thread for everything else.
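The malloc/emplace/destroy cycle mentioned above can be sketched as follows (the `S` struct is hypothetical; `emplace` lives in core.lifetime in recent compilers, std.conv in older ones):

```d
import core.stdc.stdlib : free, malloc;
import core.lifetime : emplace;   // std.conv.emplace in older compilers

struct S
{
    int x;
    this(int x) { this.x = x; }
}

void main()
{
    // malloc hands out raw, untyped memory...
    auto mem = malloc(S.sizeof)[0 .. S.sizeof];
    // ...and emplace constructs a typed object inside it
    S* s = emplace!S(mem, 42);
    assert(s.x == 42);
    destroy(*s);    // run the destructor (a no-op here, S has none)
    free(mem.ptr);  // release the raw memory
}
```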
Re: Preferred method of creating objects, structs, and arrays with deterministic memory management
On Wednesday, 1 June 2016 at 07:59:50 UTC, Anthony Monterrosa wrote: I've recently been trying to convince a friend of mine that D has at least the functionality of C++, and have been learning the language over C++ for a few months. Memory management is pretty important to him, and a subject I'm honestly curious about as well. I was wondering: what's the preferred method for deterministic memory management? [...] You should watch this year's dconf keynote. The plan is to make the GC usage opt-in. There is also the company Weka that does high performance storage and turned off the GC entirely and allocates all exceptions on startup. Their CTO gave a really nice talk too. http://dconf.org/2016/talks/zvibel.html If you want to convince your friend, tell him that using the GC is more productive and he can still do manual memory management or use RefCounted if really needed for performance. That being said, reference counting and other techniques also come at a cost. Btw, the videos aren't up yet, but the live stream is archived.
Re: Preferred method of creating objects, structs, and arrays with deterministic memory management
On 01/06/2016 7:59 PM, Anthony Monterrosa wrote: I've recently been trying to convince a friend of mine that D has at least the functionality of C++, and have been learning the language over C++ for a few months. Memory management is pretty important to him, and a subject I'm honestly curious about as well. I was wondering: what's the preferred method for deterministic memory management? I don't know much about this topic relative to most (Java background), but this is what I've found: - Classes/Structs have constructors and destructors. I am unconfident with my knowledge as to how this works with malloc and free. They get called once memory has been allocated and before it is deallocated. - core.memory can be used to call malloc and its buddies. It allows determined management, but will still use the GC if something isn't collected (presumably for leaks). No. core.memory provides an interface to the GC, not malloc and friends. - Many features and types use the GC such as exceptions, the new keyword, and all arrays except statics. Not all arrays. Slices are not required to use the GC, just don't go appending to it. - Allocators in "std.experimental.allocators" still use the GC, but give more control for class/struct objects. GCAllocator uses the GC yes. It is the default one for theAllocator() and processAllocator(). However there are others such as Mallocator which do not use the GC but directly call malloc and friends. - std.container.array can be used for deterministic arrays, but has the issue of dangling pointers. There is probably a good solution to this, but I don't know it. - There seems to be no way to completely turn off the GC. That is, never have the runtime allocate the memory used by the GC. Oh but you can! https://github.com/dlang/druntime/blob/master/src/gcstub/gc.d Now imagine replacing all those useful little function implementations with assert(0); Then you can never use the GC! 
- Andrei had a discussion in 2009 about how this issue might be handled, but there didn't seem to be an obvious consensus (http://forum.dlang.org/thread/hafpjn$1cu8$1...@digitalmars.com?page=1) These are the pieces I've gathered, but I don't really know what's true or how to use this knowledge. Some ideas I've gleaned may also be outdated. Does anyone know the "correct" way one would go about a non-GC program, and if a program can be run without ever instantiating a GC? Has there been any progress on reducing the std library's usage of the GC? People view the GC as a bad thing. Most of the time it is only ever a good thing. You're not writing an AAA game here, and even then you might be able to get away with it by just being a little bit careful and disabling it. Have some code that needs to be sped up? Sure, preallocating memory will always give you speed boosts. But the GC generally won't cause that many problems, especially if it's disabled for the hot places.
Preferred method of creating objects, structs, and arrays with deterministic memory management
I've recently been trying to convince a friend of mine that D has at least the functionality of C++, and have been learning the language over C++ for a few months. Memory management is pretty important to him, and a subject I'm honestly curious about as well. I was wondering: what's the preferred method for deterministic memory management? I don't know much about this topic relative to most (Java background), but this is what I've found: - Classes/Structs have constructors and destructors. I am unconfident with my knowledge as to how this works with malloc and free. - core.memory can be used to call malloc and its buddies. It allows determined management, but will still use the GC if something isn't collected (presumably for leaks). - Many features and types use the GC such as exceptions, the new keyword, and all arrays except statics. - Allocators in "std.experimental.allocators" still use the GC, but give more control for class/struct objects. - std.container.array can be used for deterministic arrays, but has the issue of dangling pointers. There is probably a good solution to this, but I don't know it. - There seems to be no way to completely turn off the GC. That is, never have the runtime allocate the memory used by the GC. - Andrei had a discussion in 2009 about how this issue might be handled, but there didn't seem to be an obvious consensus (http://forum.dlang.org/thread/hafpjn$1cu8$1...@digitalmars.com?page=1) These are the pieces I've gathered, but I don't really know what's true or how to use this knowledge. Some ideas I've gleaned may also be outdated. Does anyone know the "correct" way one would go about a non-GC program, and if a program can be run without ever instantiating a GC? Has there been any progress on reducing the std library's usage of the GC?
Re: Example of code with manual memory management
https://dlang.org/phobos/std_container.html and the corresponding code in Phobos. Allocators were recently introduced, though, and the containers are going to be rewritten with support for them.
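For what it's worth, the allocator API mentioned above (std.experimental.allocator) already supports fully deterministic use on its own; a small sketch with Mallocator, which draws from the C heap and never touches the GC:

```d
import std.experimental.allocator : make, makeArray, dispose;
import std.experimental.allocator.mallocator : Mallocator;

struct Point { double x, y; }

void main()
{
    alias alloc = Mallocator.instance;

    // allocate a struct from the C heap, not the GC
    auto p = alloc.make!Point(1.0, 2.0);
    scope(exit) alloc.dispose(p);   // deterministic release

    // same for a dynamic array
    auto xs = alloc.makeArray!int(8);
    scope(exit) alloc.dispose(xs);

    assert(p.y == 2.0);
    assert(xs.length == 8);
}
```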
Example of code with manual memory management
Hi, I am looking for examples of types where memory management is manual and the type supports operator overloading, etc. I'd be grateful if someone could point me to some sample code. Thanks and Regards Dibyendu
Re: D Beginner Trying Manual Memory Management
I don't think you've read h5py source in enough detail :) You're right - I haven't done more than browse it. It's based HEAVILY on duck typing. There is a question here about what to do in D. On the one hand, the flexibility of being able to open a foreign HDF5 file where you don't know the dataset type beforehand is very nice. On the other, the adaptations needed to handle this flexibly get in the way when you are dealing with your own data that has a set format and where recompilation is acceptable if it changes. Looking at the 'ease' of processing JSON, even using vibed, I think that one will need to implement both eventually, but perhaps starting with static typing. In addition, it has way MORE classes than the C++ hierarchy does. E.g., the high-level File object actually has these parents: File : Group, Group : HLObject, MutableMappingWithLock, HLObject : CommonStateObject and internally the File also keeps a reference to the file id, which is an instance of FileID, which inherits from GroupID, which inherits from ObjectID - do I need to continue? Okay - I guess there is a distinction between the interface to the outside world (where I think the h5py etc. way is superior for most uses) and the implementation. Isn't the reason h5py has lots of classes primarily that this is how you write good code in Python, whereas in many cases it is not true in D (not that you should ban classes, but often structs plus free-floating functions are more suitable)? PyTables, on the contrary, is quite badly written (although it works quite well and there are brilliant folks on the dev team like Francesc Alted) and looks like a dump of C code interwoven with hackish Python code. Interesting. What do you think is low quality about the design? In h5py you can do things like file["/dataset"].write(...) 
-- this just wouldn't work as is in a strictly typed language since the indexing operator generally returns you something of a Location type (or an interface, rather) which can be a group/datatype/dataset which is only known at runtime. Well, if you don't mind recompiling your code when the data set type changes (or you encounter a new data set) then you can do that (which is what I posted a link to earlier). It depends on your use case. It's hard to think of an application more dynamic than web sites, and yet people seem happy enough with vibed's use of compiled diet templates as the primary implementation. They would like the option of dynamic ones too, and I think this would be useful in this domain too, since one does look at foreign data on occasion. One could of course use the quick compilation of D to regenerate parts of the code when this happens. Whether or not this is acceptable depends on your use case - for some it might be okay, but obviously it is no good if you are writing a generic H5 browser/charting tool. So I think if you don't allow static dataset typing it means the flexibility of dynamic typing gets in the way for some uses (which might be most of them), but you need to add dynamic typing too. Shall we move this to a different thread and/or email, as I am afraid I have hijacked the poor original poster's request. On the refcounting question, I confess that I do not fully understand your concern, which may well reflect a lack of deep experience with D on my part. Adam Ruppe suggests that it's generally okay to rely on a struct destructor to call C cleanup code. I can appreciate this may not be true with h5 and, if you can spare the time, I would love to understand more precisely why not. Out of all of them, only the dataset supports the write method but you don't know it's going to be a dataset. See the problem? In this case I didn't quite follow. Where does this fall down ? 
void h5write(T)(Dataset x, T data) I have your email somewhere and will drop you a line. Or you can email me laeeth at laeeth.com. And let's create a new thread. Laeeth.
Re: D Beginner Trying Manual Memory Management
In the hierarchy example above (c++ hdf hierarchy link), by using UFCS to implement the shared methods (which are achieved by multiple inheritance in the c++ counterpart) did you mean something like this?

// id.d
struct ID { int id; ... }
// location.d
struct Location { ID _id; alias _id this; ... }
// file.d
public import commonfg; // ugh
struct File { Location _location; alias _location this; ... }
// group.d
public import commonfg;
struct Group { Location _location; alias _location this; ... }
// commonfg.d
{ ... }
enum isContainer(T) = is(T : File) || is(T : Group);
auto method1(T)(T obj, args) if (isContainer!T) { ... }
auto method2(T)(T obj, args) if (isContainer!T) { ... }

I guess two of my gripes with UFCS is (a) you really have to // another hdf-specific thing here but a good example in general is that some functions return you an id for an object which is one of the location subtypes (e.g. it could be a File or could be a Group depending on run-time conditions), so it kind of feels natural to use polymorphism and classes for that, but what would you do with the struct approach? The only thing that comes to mind is Variant, but it's quite meh to use in practice.

void unlink(File f) { }
void unlink(Group g) { }

For simple cases maybe one can keep it simple, and despite the Byzantine interface what one is trying to do when using HDF5 is not intrinsically so complex.
Re: D Beginner Trying Manual Memory Management
struct File { Location _location; alias _location this; ... } // group.d public import commonfg; struct Group { Location _location; alias _location this; ... } // commonfg.d { ... } enum isContainer(T) = is(T : File) || is(T : Group); auto method1(T)(T obj, args) if (isContainer!T) { ... } auto method2(T)(T obj, args) if (isContainer!T) { ... } I guess two of my gripes with UFCS is (a) you really have to // another hdf-specific thing here but a good example in general is that some functions return you an id for an object which is one of the location subtypes (e.g. it could be a File or could be a Group depending on run-time conditions), so it kind of feels natural to use polymorphism and classes for that, but what would you do with the struct approach? The only thing that comes to mind is Variant, but it's quite meh to use in practice. void unlink(File f){} void unlink(Group g){} For simple cases maybe one can keep it simple, and despite the Byzantine interface what one is trying to do when using HDF5 is not intrinsically so complex. So your solution is copying and pasting the code? But now repeat that for 200 other functions and a dozen more types that can be polymorphic in the weirdest ways possible... If you simply have a few lines calling the API and the validation is different enough for file and group (I haven't written unlink yet) then why not (and move genuinely shared code out into helper functions)? The alternative is a long method with lots of conditions, which may be the best in some cases but may be harder to follow. I do like the h5py and pytables approaches. One doesn't need to bother too much with the implementation when using their library. However, what I am doing is quite simple from a data perspective - a decent amount of it, but it is not an interesting problem from a theoretical perspective - just execution. Now if you are higher octane as a user you may be able to see what I cannot. 
But on the other hand, the Pareto principle applies, and in my view a library should make it simple to do simple things. One can't get there if the primary interface is a direct mapping of the HDF5 hierarchy, and I also think that is unnecessary with D. But I very much appreciate your work as the final result is better for everyone that way, and you are evidently a much longer running user of D than me. I never used C++ as it just seemed too ugly! and I suspect the difference in backgrounds is shaping perspectives. What do you think the trickiest parts are with HDF5? (You mention weird polymorphism). Laeeth
Re: D Beginner Trying Manual Memory Management
On Wednesday, 14 January 2015 at 14:54:09 UTC, Laeeth Isharc wrote: In the hierarchy example above (c++ hdf hierarchy link), by using UFCS to implement the shared methods (which are achieved by multiple inheritance in the c++ counterpart) did you mean something like this? // id.d struct ID { int id; ... } // location.d struct Location { ID _id; alias _id this; ... } // file.d public import commonfg; // ugh struct File { Location _location; alias _location this; ... } // group.d public import commonfg; struct Group { Location _location; alias _location this; ... } // commonfg.d { ... } enum isContainer(T) = is(T : File) || is(T : Group); auto method1(T)(T obj, args) if (isContainer!T) { ... } auto method2(T)(T obj, args) if (isContainer!T) { ... } I guess two of my gripes with UFCS is (a) you really have to // another hdf-specific thing here but a good example in general is that some functions return you an id for an object which is one of the location subtypes (e.g. it could be a File or could be a Group depending on run-time conditions), so it kind of feels natural to use polymorphism and classes for that, but what would you do with the struct approach? The only thing that comes to mind is Variant, but it's quite meh to use in practice. void unlink(File f){} void unlink(Group g){} For simple cases maybe one can keep it simple, and despite the Byzantine interface what one is trying to do when using HDF5 is not intrinsically so complex. So your solution is copying and pasting the code? But now repeat that for 200 other functions and a dozen more types that can be polymorphic in the weirdest ways possible...
Re: D Beginner Trying Manual Memory Management
On Wednesday, 14 January 2015 at 16:27:17 UTC, Laeeth Isharc wrote: struct File { Location _location; alias _location this; ... } // group.d public import commonfg; struct Group { Location _location; alias _location this; ... } // commonfg.d { ... } enum isContainer(T) = is(T : File) || is(T : Group); auto method1(T)(T obj, args) if (isContainer!T) { ... } auto method2(T)(T obj, args) if (isContainer!T) { ... } I guess two of my gripes with UFCS is (a) you really have to // another hdf-specific thing here but a good example in general is that some functions return you an id for an object which is one of the location subtypes (e.g. it could be a File or could be a Group depending on run-time conditions), so it kind of feels natural to use polymorphism and classes for that, but what would you do with the struct approach? The only thing that comes to mind is Variant, but it's quite meh to use in practice. void unlink(File f){} void unlink(Group g){} For simple cases maybe one can keep it simple, and despite the Byzantine interface what one is trying to do when using HDF5 is not intrinsically so complex. So your solution is copying and pasting the code? But now repeat that for 200 other functions and a dozen more types that can be polymorphic in the weirdest ways possible... If you simply have a few lines calling the API and the validation is different enough for file and group (I haven't written unlink yet) then why not (and move genuinely shared code out into helper functions)? The alternative is a long method with lots of conditions, which may be the best in some cases but may be harder to follow. I do like the h5py and pytables approaches. One doesn't need to bother too much with the implementation when using their library. However, what I am doing is quite simple from a data perspective - a decent amount of it, but it is not an interesting problem from a theoretical perspective - just execution. 
Now if you are higher octane as a user you may be able to see what I cannot. But on the other hand, the Pareto principle applies, and in my view a library should make it simple to do simple things. One can't get there if the primary interface is a direct mapping of the HDF5 hierarchy, and I also think that is unnecessary with D. But I very much appreciate your work, as the final result is better for everyone that way, and you are evidently a much longer-running user of D than me. I never used C++ as it just seemed too ugly! And I suspect the difference in backgrounds is shaping perspectives. What do you think the trickiest parts are with HDF5? (You mention weird polymorphism). Laeeth I don't think you've read h5py source in enough detail :) It's based HEAVILY on duck typing. In addition, it has way MORE classes than the C++ hierarchy does. E.g., the high-level File object actually has these parents: File : Group, Group : HLObject, MutableMappingWithLock, HLObject : CommonStateObject and internally the File also keeps a reference to the file id, which is an instance of FileID, which inherits from GroupID, which inherits from ObjectID - do I need to continue? :) PyTables, on the contrary, is quite badly written (although it works quite well and there are brilliant folks on the dev team like Francesc Alted) and looks like a dump of C code interwoven with hackish Python code. In h5py you can do things like file["/dataset"].write(...) -- this just wouldn't work as is in a strictly typed language since the indexing operator generally returns you something of a Location type (or an interface, rather) which can be a group/datatype/dataset which is only known at runtime. Out of all of them, only the dataset supports the write method but you don't know it's going to be a dataset. See the problem? I don't want the user code to deal with any of the HDF5 C API and/or have a bunch of if conditions or explicit casts, which is outright ugly. 
Ideally, it would work kind of like H5PY, abstracting the user away from refcounting, error code checking after each operation, object type checking and all that stuff.
Re: D Beginner Trying Manual Memory Management
I see, thanks! :) I've started liking structs more and more recently as well and been pondering on how to convert a class-based code that looks like this (only the base class has any data): it's hard to tell from a brief description, but having multiple inheritance immediately rings an alarm ring for me. something is very-very-very wrong if you need to have a winged whale. ;-) A real-world example: http://www.hdfgroup.org/HDF5/doc/cpplus_RM/hierarchy.html H5::File is both an H5::Location and H5::CommonFG (but not an H5::Object) H5::Group is both an H5::Object (subclass of H5::Location) and H5::CommonFG H5::Dataset is an H5::Object i see something named CommonFG here, which seems a good thing to move out of the hierarchy altogether. bwah, i'm not even sure that the given hierarchy is good for D. C++ has no UFCS, and it's incredibly hard to check if some entity has some methods/properties in C++, so they have no other choice than to work around those limitations. it may be worthwhile to redesign the whole thing for D, exploiting D's shiny UFCS and metaprogramming features. and, maybe, moving some things to interfaces too. I just finished reading aldanor's blog, so I know he is slightly allergic to naked functions and prefers classes ;) With Ketmar, I very much agree (predominantly as a user of HDF5 and less so as an inexperienced D programmer writing a wrapper for it). It's a pain to figure out just how to do simple things until you know the H5 library. You have to create an object for file permissions before you even get started, then similarly for the data series (datasets) within, another for the dimensions of the array, etc. etc. - that doesn't fit with the intrinsic nature of the domain. There is a more general question of bindings/wrappers - preserve the original structure and naming so existing code can be ported, or write a wrapper that makes it easy for the user to accomplish his objectives. 
It seems like for the bindings preserving the library structure is fine, but for the wrapper one might as well make things easy. Eg here https://gist.github.com/Laeeth/9637233db41a11a9d1f4 line 146. (sorry for duplication and messiness of code, which I don't claim to be perfectly written - I wanted to try something quickly and have not yet tidied up). So rather than navigate the Byzantine hierarchy, one can just do something like this (which will take a struct of PriceBar - date,open,high,low,close - and put it in your desired dataset and file, appending or overwriting as you prefer). dumpDataSpaceVector!PriceBar(file,ticker,array(priceBars[ticker]),DumpMode.truncate); which is closer to h5py in Python. (It uses reflection to figure out the contents of a non-nested struct, but won't yet cope with arrays and nested structs inside). And of course a full wrapper might be a bit more complicated, but I truly think one can do better than mapping the HDF5 hierarchy one for one. Laeeth.
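The compile-time reflection that dumpDataSpaceVector relies on can be sketched in a few lines; the exact fields of PriceBar below are assumed from the description in the post, and describe is a made-up name for illustration:

```d
import std.stdio : writefln;
import std.traits : FieldNameTuple;

// hypothetical record type, standing in for the PriceBar mentioned above
struct PriceBar
{
    int date;
    double open, high, low, close;
}

// Walk a flat struct's fields at compile time, the way a wrapper can
// build an HDF5 compound type without the user describing the layout.
void describe(T)()
{
    foreach (name; FieldNameTuple!T)
        writefln("%s : %s", name,
                 typeof(__traits(getMember, T, name)).stringof);
}

void main()
{
    describe!PriceBar();
}
```

As the post notes, extending this to arrays and nested structs means recursing on each field's type, which is where the real work starts.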
Re: D Beginner Trying Manual Memory Management
On Tue, 13 Jan 2015 17:08:37 + Laeeth Isharc via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote: I just finished reading aldanor's blog, so I know he is slightly allergic to naked functions and prefers classes ;) that's due to the absence of modules in C/C++. and namespaces aren't of much help here either. and, of course, due to missing UFCS, which prevents a nice `obj.func()` for free functions. ;-) signature.asc Description: PGP signature
Re: D Beginner Trying Manual Memory Management
On Tue, 13 Jan 2015 18:35:15 + aldanor via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote: I guess two of my gripes with UFCS is (a) you really have to use public imports in the modules where the target types are defined so you bring all the symbols in whether you want it or not (b) you lose access to private members because it's not the same module anymore (correct me if I'm wrong?). Plus, you need to decorate every single free function with a template constraint. you can make a package and set protection to `package` instead of `private`, so your function will still be able to access internal fields, but package users will not. this feature is often missed by people who are used to the `public`/`protected`/`private` triad.
Re: D Beginner Trying Manual Memory Management
On Tuesday, 13 January 2015 at 17:08:38 UTC, Laeeth Isharc wrote: I see, thanks! :) I've started liking structs more and more recently as well and been pondering on how to convert a class-based code that looks like this (only the base class has any data): it's hard to tell by brief description. but having multiple inheritance immediately rings an alarm ring for me. something is very-very-very wrong if you need to have a winged whale. ;-) A real-world example: http://www.hdfgroup.org/HDF5/doc/cpplus_RM/hierarchy.html H5::File is both an H5::Location and H5::CommonFG (but not an H5::Object) H5::Group is both an H5::Object (subclass of H5::Location) and H5::CommonFG H5::Dataset is an H5::Object i see something named CommonFG here, which seems to good thing to move out of hierarchy altogether. bwah, i don't even sure that given hierarchy is good for D. C++ has no UFCS, and it's incredibly hard to check if some entity has some methods/properties in C++, so they have no other choice than to work around that limitations. it may be worthful to redesign the whole thing for D, exploiting D shiny UFCS and metaprogramming features. and, maybe, moving some things to interfaces too. I just finished reading aldanor's blog, so I know he is slightly allergic to naked functions and prefers classes ;) With Ketmar, I very much agree (predominantly as a user of HDF5 and less so as an inexperienced D programmr writing a wrapper for it). It's a pain to figure out just how to do simple things until you know the H5 library. You have to create an object for file permissions before you even get started, then similarly for the data series (datasets) within, another for the dimensions of the array, etc etc - that doesn't fit with the intrinsic nature of the domain. There is a more general question of bindings/wrappers - preserve the original structure and naming so existing code can be ported, or write a wrapper that makes it easy for the user to accomplish his objectives. 
It seems like for the bindings preserving the library structure is fine, but for the wrapper one might as well make things easy. Eg here https://gist.github.com/Laeeth/9637233db41a11a9d1f4 line 146. (sorry for duplication and messiness of code, which I don't claim to be perfectly written - I wanted to try something quickly and have not yet tidied up). So rather than navigate the Byzantine hierarchy, one can just do something like this (which will take a struct of PriceBar - date,open,high,low,close - and put it in your desired dataset and file, appending or overwriting as you prefer). dumpDataSpaceVector!PriceBar(file,ticker,array(priceBars[ticker]),DumpMode.truncate); which is closer to h5py in Python. (It uses reflection to figure out the contents of a non-nested struct, but won't yet cope with arrays and nested structs inside). And of course a full wrapper might be a bit more complicated, but I truly think one can do better than mapping the HDF5 hierarchy one for one. Laeeth. In the hierarchy example above (c++ hdf hierarchy link), by using UFCS to implement the shared methods (which are achieved by multiple inheritance in the c++ counterpart) did you mean something like this?

// id.d
struct ID { int id; ... }
// location.d
struct Location { ID _id; alias _id this; ... }
// file.d
public import commonfg; // ugh
struct File { Location _location; alias _location this; ... }
// group.d
public import commonfg;
struct Group { Location _location; alias _location this; ... }
// commonfg.d
{ ... }
enum isContainer(T) = is(T : File) || is(T : Group);
auto method1(T)(T obj, args) if (isContainer!T) { ... }
auto method2(T)(T obj, args) if (isContainer!T) { ... }

I guess two of my gripes with UFCS is (a) you really have to use public imports in the modules where the target types are defined so you bring all the symbols in whether you want it or not (b) you lose access to private members because it's not the same module anymore (correct me if I'm wrong?). 
Plus, you need to decorate every single free function with a template constraint. // another hdf-specific thing here but a good example in general is that some functions return you an id for an object which is one of the location subtypes (e.g. it could be a File or could be a Group depending on run-time conditions), so it kind of feels natural to use polymorphism and classes for that, but what would you do with the struct approach? The only thing that comes to mind is Variant, but it's quite meh to use in practice.
Re: D Beginner Trying Manual Memory Management
On Mon, 12 Jan 2015 22:07:13 + aldanor via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote: I see, thanks! :) I've started liking structs more and more recently as well and been pondering on how to convert a class-based code that looks like this (only the base class has any data): it's hard to tell from a brief description, but having multiple inheritance immediately rings an alarm ring for me. something is very-very-very wrong if you need to have a winged whale. ;-)
Re: D Beginner Trying Manual Memory Management
On Mon, 12 Jan 2015 23:06:16 + jmh530 via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote: I had seen some stuff on alias this, but I hadn't bothered to try to understand it until now. If I'm understanding the first example here (http://dlang.org/class.html#AliasThis), alias this lets you refer to x in s by writing either s.x (as normally) or just s. That didn't seem that interesting, but then I found an example (http://3d.benjamin-thaut.de/?p=90) where they alias this'ed a struct method. That's pretty interesting. there is a nice page by p0nce that shows a nice and simple `alias this` trick usage: http://p0nce.github.io/d-idioms/#Extending-a-struct-with-alias-this and i have a stream.d module in my IV package ( http://repo.or.cz/w/iv.d.git/tree ), which works with i/o streams by testing if the passed struct/class has the necessary methods (`isReadable!`, `isWriteable!`, `isSeekable!`, etc.).
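The method-testing approach ketmar describes can be sketched with __traits(compiles); the trait and method names below are illustrative, not iv.d's actual definitions:

```d
// Minimal duck-typing traits in the spirit of the stream checks above:
// a "stream" is anything with the right methods, no base class required.
enum isReadable(T) = __traits(compiles, (T t) { ubyte[4] buf; t.read(buf[]); });
enum isSeekable(T) = __traits(compiles, (T t) { t.seek(0); });

struct MemStream
{
    ubyte[] data;
    size_t pos;
    void read(ubyte[] buf)
    {
        buf[] = data[pos .. pos + buf.length];
        pos += buf.length;
    }
    void seek(size_t p) { pos = p; }
}

struct WriteOnly
{
    void write(const(ubyte)[] buf) { }
}

// the checks run entirely at compile time
static assert(isReadable!MemStream && isSeekable!MemStream);
static assert(!isReadable!WriteOnly);

void main()
{
    auto s = MemStream([1, 2, 3, 4]);
    ubyte[2] buf;
    s.read(buf[]);
    assert(buf == [1, 2]);
}
```

Any function constrained with `if (isReadable!T)` then accepts every conforming type, which is the UFCS-plus-traits alternative to an inheritance hierarchy discussed throughout this thread.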
Re: D Beginner Trying Manual Memory Management
On Mon, 12 Jan 2015 22:07:13 + aldanor via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote: I see, thanks! :) I've started liking structs more and more recently as well and been pondering on how to convert a class-based code that looks like this (only the base class has any data): p.s. can't you convert most of that to free functions? thanks to UFCS you'll be able to use them with `obj.func` notation. and by either defining `package` protection for class fields, or simply writing that functions in the same module they will have access to internal object stuff. and if you can return `obj` from each function, you can go with templates and chaining. ;-) signature.asc Description: PGP signature
Re: D Beginner Trying Manual Memory Management
On Tuesday, 13 January 2015 at 08:33:57 UTC, ketmar via Digitalmars-d-learn wrote: On Mon, 12 Jan 2015 22:07:13 + aldanor via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote: I see, thanks! :) I've started liking structs more and more recently as well and been pondering on how to convert a class-based code that looks like this (only the base class has any data): it's hard to tell by brief description. but having multiple inheritance immediately rings an alarm ring for me. something is very-very-very wrong if you need to have a winged whale. ;-) A real-world example: http://www.hdfgroup.org/HDF5/doc/cpplus_RM/hierarchy.html H5::File is both an H5::Location and H5::CommonFG (but not an H5::Object) H5::Group is both an H5::Object (subclass of H5::Location) and H5::CommonFG H5::Dataset is an H5::Object
Re: D Beginner Trying Manual Memory Management
On Tue, 13 Jan 2015 16:08:15 + aldanor via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote: On Tuesday, 13 January 2015 at 08:33:57 UTC, ketmar via Digitalmars-d-learn wrote: On Mon, 12 Jan 2015 22:07:13 + aldanor via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote: I see, thanks! :) I've started liking structs more and more recently as well and been pondering on how to convert a class-based code that looks like this (only the base class has any data): it's hard to tell by brief description. but having multiple inheritance immediately rings an alarm ring for me. something is very-very-very wrong if you need to have a winged whale. ;-) A real-world example: http://www.hdfgroup.org/HDF5/doc/cpplus_RM/hierarchy.html H5::File is both an H5::Location and H5::CommonFG (but not an H5::Object) H5::Group is both an H5::Object (subclass of H5::Location) and H5::CommonFG H5::Dataset is an H5::Object i see something named CommonFG here, which seems to good thing to move out of hierarchy altogether. bwah, i don't even sure that given hierarchy is good for D. C++ has no UFCS, and it's incredibly hard to check if some entity has some methods/properties in C++, so they have no other choice than to work around that limitations. it may be worthful to redesign the whole thing for D, exploiting D shiny UFCS and metaprogramming features. and, maybe, moving some things to interfaces too. signature.asc Description: PGP signature
D Beginner Trying Manual Memory Management
I'm new to D. I have some modest knowledge of C++, but am more familiar with scripting languages (Matlab, Python, R). D seems so much easier than C++ in a lot of ways (and I just learned about rdmd today, which is pretty cool). I am concerned about performance of D vs. C++, so I wanted to learn a little bit more about manual memory management, in case I might ever need it (not for any particular application). The D Language book Section 6.3.4-5 covers the topic. I basically copied below and made some small changes.

import core.stdc.stdlib;
import std.stdio;

class Buffer {
    private void* data;
    // Constructor
    this() { data = malloc(1024); }
    // Destructor
    ~this() { free(data); }
}

unittest {
    auto b = new Buffer;
    auto b1 = b;
    destroy(b); // changed from clear in book example
    assert(b1.data is null);
    writeln("Unit Test Finished");
}

I was thinking that it might be cool to use scope(exit) to handle the memory management. It turns out the below unit test works.

unittest {
    auto b = new Buffer;
    scope(exit) destroy(b);
    writeln("Unittest Finished");
}

However, if you leave in the auto b1 and assert, then it fails. I suspect this is for the same reason that shared pointers are a thing in C++ (it can't handle copies of the pointer). Alternately, you can use some other scope, and something like this

unittest {
    {
        Buffer b = new Buffer;
        scope(exit) destroy(b);
    }
    destroy(b);
    writeln("Unittest Finished");
}

will fail because b has already been destroyed. I thought this behavior was pretty cool. If you followed this approach, then you wouldn't have to wait until the end of the program to delete the pointers. The downside would be if you need to write a lot of pointers and there would be a lot of nesting (likely the motivation for the reference counting approach to smart pointers). I wasn't sure how to figure out a way to combine these two components so that I only have to write one line. I thought one approach might be to put a scope(exit) within the constructor, but that doesn't work. 
Also, if I try to do it within a template function, then the scope(exit) is limited to the function scope, which isn't the same thing. Outside of writing a unique_ptr template class (which I have tried to do, but couldn't get it to work) I don't really know how to combine them into one line. Any ideas?
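One way to get that one-liner is a small unique_ptr-style struct: its destructor calls destroy automatically when the wrapper leaves scope. This is only a sketch (Unique and unique are made-up names, not Phobos types), and note it only makes the *destructor* deterministic; the memory from `new` is still reclaimed by the GC later:

```d
import std.stdio : writeln;

// A minimal unique_ptr-style wrapper: construction and deterministic
// destruction collapse into one declaration, no explicit scope(exit).
struct Unique(T) if (is(T == class))
{
    private T obj;
    this(T o) { obj = o; }
    @disable this(this);                     // no copies: ownership stays unique
    ~this() { if (obj !is null) destroy(obj); }
    alias obj this;                          // use it like a T
}

Unique!T unique(T, Args...)(Args args)
{
    return Unique!T(new T(args));
}

class Buffer
{
    static bool destroyed;
    ~this() { destroyed = true; }
}

void main()
{
    {
        auto b = unique!Buffer();  // one line: allocate + arrange cleanup
    }
    assert(Buffer.destroyed);      // destructor ran at scope exit
    writeln("Buffer destructor ran deterministically");
}
```

Copying is disabled outright here; allowing copies safely is exactly the reference-counting problem mentioned above.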
Re: D Beginner Trying Manual Memory Management
On Mon, 12 Jan 2015 19:29:53 + jmh530 via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote: the proper answer is too long to write (it would be more an article than a forum answer ;-), so i'll just give you some directions:

import std.typecons;
{
    auto b = scoped!B(); // `auto` is important here!
    ...
}

`scoped!` allocates the class instance *on* *the* *stack*, and automatically calls the destructor when the object goes out of scope. but you'd better consider using structs for such things, as structs are stack-allocated by default (unlike classes, which are reference types and have to be allocated explicitly). there is a big difference between `class` and `struct` in D, much bigger than in C++ (where it's only about default protection, actually).
Re: D Beginner Trying Manual Memory Management
Thanks for the reply, I wasn't familiar with scoped. I was aware that structs are on the stack and classes are on the heap in D, but I didn't know it was possible to put a class on the stack. Might be interesting to see how this is implemented. After looking up some more C++, I think what I was trying to do is more like make_unique than unique_ptr. On Monday, 12 January 2015 at 19:42:14 UTC, ketmar via Digitalmars-d-learn wrote: On Mon, 12 Jan 2015 19:29:53 + jmh530 via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote: the proper answer is too long to write (it will be more an article that a forum answer ;-), so i'll just give you some directions: import std.typecons; { auto b = scoped!B(); // `auto` is important here! ... } `scoped!` allocating class instance *on* *stack*, and automatically calls destructor when object goes out of scope. but you'd better consider using struct for such things, as struct are stack-allocated by default (unlike classes, which are reference type and should be allocated manually). there is a big difference between `class` and `struct` in D, much bigger that in C++ (where it's only about default protection, actually).
Re: D Beginner Trying Manual Memory Management
On Mon, 12 Jan 2015 20:14:19 + jmh530 via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote: Thanks for the reply, I wasn't familiar with scoped. [...] Might be interesting to see how this is implemented. actually, there is nothing complicated there (if you don't want to write a universal thing like `emplace!` ;-). it builds a wrapper struct big enough to hold the class instance, copies the class `.init` there and calls the class constructor. the rest of the magic is done by the compiler: when the struct goes out of scope, the compiler calls the struct destructor, which in turn calls the class destructor. ah, and it forwards all other requests with the `alias this` trick. After looking up some more C++, I think what I was trying to do is more like make_unique than unique_ptr. i don't remember C++ well, but i nevertheless encourage you to take a look at `std.typecons`. there are some handy things there, like `Rebindable!` or `Nullable!`. and some funny things like `BlackHole!` and `WhiteHole!`. ;-) it even has `RefCounted!`, but it doesn't play well with classes yet (AFAIR).
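For readers following along, here is a minimal sketch of `scoped` in action (the `Resource` class is hypothetical, just for illustration):

```d
import std.stdio;
import std.typecons : scoped;

class Resource
{
    string name;
    this(string name) { this.name = name; writeln("acquire ", name); }
    ~this() { writeln("release ", name); }
}

void main()
{
    {
        // scoped!Resource places the instance inside a stack-allocated
        // wrapper struct; constructor arguments are forwarded as usual.
        auto r = scoped!Resource("stack-allocated");
        writeln("using ", r.name); // member access forwarded via alias this
    } // destructor runs deterministically here, no GC involved
    writeln("after scope");
}
```

Note that the wrapper must not outlive the scope (e.g. don't store the reference elsewhere), since the instance lives on the stack.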
Re: D Beginner Trying Manual Memory Management
On Monday, 12 January 2015 at 20:30:45 UTC, ketmar via Digitalmars-d-learn wrote: it even has `RefCounted!`, but it doesn't play well with classes yet (AFAIR). I wonder if it's possible to somehow make a version of RefCounted that would work with classes (even if limited/restricted in certain ways), or is it just technically impossible because of reference semantics?
Re: D Beginner Trying Manual Memory Management
On Mon, 12 Jan 2015 21:37:27 + aldanor via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote: I wonder if it's possible to somehow make a version of RefCounted that would work with classes (even if limited/restricted in certain ways), or is it just technically impossible because of reference semantics? it's hard. especially hard when you consider inheritance (which does not play well with templates) and, yes, ref semantics. on the other side, i find myself rarely using classes at all. i mostly write templates that check whether a passed thing has all the necessary methods and properties in place, and just use that. with D's metaprogramming abilities (and the `alias this` trick ;-) inheritance becomes not so important, and so do classes. sometimes i use structs with delegate fields to simulate some sort of virtual methods, 'cause i keep forgetting about that `class` thingy. ;-) OOP is overrated. at least c++-like (should i say simula-like?) OOP. ;-)
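The structs-with-delegate-fields idea mentioned above can be sketched like this (all names are hypothetical, just to show the shape of the technique):

```d
import std.stdio;

// A struct with a delegate field can stand in for a class with a
// virtual method: each instance carries its own implementation.
struct Shape
{
    string name;
    double delegate() area; // "virtual" method slot
}

Shape makeCircle(double r)
{
    Shape s;
    s.name = "circle";
    s.area = () => 3.141592653589793 * r * r; // closure captures r
    return s;
}

Shape makeSquare(double side)
{
    Shape s;
    s.name = "square";
    s.area = () => side * side;
    return s;
}

void main()
{
    // Heterogeneous behavior without a class hierarchy.
    foreach (shape; [makeCircle(1.0), makeSquare(2.0)])
        writefln("%s: %.2f", shape.name, shape.area());
}
```

One caveat: the closures themselves are typically GC-allocated, so this pattern trades one kind of allocation for another.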
Re: D Beginner Trying Manual Memory Management
On Monday, 12 January 2015 at 21:54:51 UTC, ketmar via Digitalmars-d-learn wrote: it's hard. especially hard when you consider inheritance (which does not play well with templates) and, yes, ref semantics. [...] OOP is overrated. at least c++-like (should i say simula-like?) OOP. ;-) I see, thanks! :) I've started liking structs more and more recently as well, and I've been pondering how to convert class-based code that looks like this (only the base class has any data): class Base { T m_variable; } class Common : Base { /* tons of methods; uses m_variable */ } class Extra : Base { /* another ton of methods; uses m_variable */ } class A : Extra, Common { ... } class B : Common { ... } class C : Extra { ... } to refcounted structs with alias this, but couldn't quite figure out how to do it (other than using mixin templates...). Even if the multiple alias this DIP were implemented, I don't think it would help much here :/
Re: D Beginner Trying Manual Memory Management
On Monday, 12 January 2015 at 19:29:54 UTC, jmh530 wrote: I'm new to D. I have some modest knowledge of C++, but am more familiar with scripting languages (Matlab, Python, R). D seems so much easier than C++ in a lot of ways (and I just learned about rdmd today, which is pretty cool). I am concerned about performance of D vs. C++, so I wanted to learn a little bit more about manual memory management, in case I might ever need it (not for any particular application). There is a good article on the D Wiki that covers this topic with several different patterns and working examples: http://wiki.dlang.org/Memory_Management I hope you'll find it helpful. Mike
Re: D Beginner Trying Manual Memory Management
I had seen some stuff on alias this, but I hadn't bothered to try to understand it until now. If I'm understanding the first example here (http://dlang.org/class.html#AliasThis), alias this lets you refer to x in s by writing either s.x (as normal) or just s. That didn't seem that interesting, but then I found an example (http://3d.benjamin-thaut.de/?p=90) where they alias this'ed a struct method. That's pretty interesting. OOP seems like a good idea, but every time I've written a bunch of classes in C++ or Python, I inevitably wonder to myself why I just spent 5 times as long doing something I could do with functions. Then there's endless discussion about pimpl. On Monday, 12 January 2015 at 21:54:51 UTC, ketmar via Digitalmars-d-learn wrote: [...] OOP is overrated. at least c++-like (should i say simula-like?) OOP. ;-)
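The basic `alias this` behavior described above can be shown in a few lines:

```d
import std.stdio;

struct Wrapper
{
    int x;
    alias x this; // Wrapper implicitly converts to / forwards to x
}

void main()
{
    Wrapper s;
    s.x = 41;      // normal member access
    int y = s + 1; // s is used where an int is expected
    writeln(y);    // 42
}
```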
Re: custom memory management
I asked something similar some days ago. Maybe this provides some information that is helpful to you: http://forum.dlang.org/thread/mekdjoyejtfpafpcd...@forum.dlang.org
Re: custom memory management
On Thursday, 27 February 2014 at 21:46:17 UTC, Simon Bürger wrote: Sadly, this is incorrect as well. Because if such an object is collected by the gc, but the gc decides not to run the destructor, the buffer will never be freed. I think you misinterpret the spec. If an object is collected, its destructor is guaranteed to run. But not all objects are guaranteed to be collected. For example, no collection happens at program termination. So it is OK to release, in a destructor, resources that will be reclaimed by the OS at program termination anyway. The list of such resources is OS-specific, but heap memory tends to be on it.
Re: custom memory management
On Friday, 28 February 2014 at 10:40:17 UTC, Dicebot wrote: I think you misinterpret the spec. If an object is collected, its destructor is guaranteed to run. But not all objects are guaranteed to be collected. [...] If you are right, that would mean that the current dmd/runtime does not follow the spec. Curious. The current implementation is not aware of struct destructors on the heap, i.e. the GC.BlkAttr.FINALIZE flag is not set for structs (or arrays of structs). In the struct-inside-a-class example, the struct destructor is called by the (automatically generated) class destructor. The gc only knows about the class destructor and calls only that one directly.
Re: custom memory management
On Friday, 28 February 2014 at 12:36:48 UTC, Simon Bürger wrote: If you are right, that would mean that the current dmd/runtime does not follow the spec. Curious. The current implementation is not aware of struct destructors on the heap, i.e. the GC.BlkAttr.FINALIZE flag is not set for structs (or arrays of structs). Ah, structs are a bit tricky. The spec has an override for struct destructors that says explicitly "Destructors are called when an object goes out of scope", so one can argue that ignoring the heap matches it. But the very next line contradicts it: "Their purpose is to free up resources owned by the struct object." I believe it is a DMD bug though. There is no reason why destructors can't be run here. Most likely it is just a defect of the current GC implementation that everyone got used to. This bug report discussion confirms it: https://d.puremagic.com/issues/show_bug.cgi?id=2834 Looks like the decision was to fix it once the precise GC gets added. In the struct-inside-a-class example, the struct destructor is called by the (automatically generated) class destructor. The gc only knows about the class destructor and calls only that one directly. Yes, that matches my current observations. Did not occur to me before because of C coding habits :) Lucky me.
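The behavior under discussion can be reproduced with a sketch like this (output depends on the compiler/runtime version; in the implementations discussed in this thread, only the struct-in-class case is finalized):

```d
import core.memory : GC;
import core.stdc.stdio : printf; // printf is safe inside finalizers

struct S
{
    ~this() { printf("S dtor\n"); }
}

class C
{
    S s; // the generated class destructor runs s's destructor
}

void main()
{
    // Heap array of structs: in the runtimes discussed here, the GC
    // does not set FINALIZE for this block, so S's destructor never runs.
    S[] arr = new S[3];
    arr = null;

    // Struct inside a class: the GC calls the class finalizer,
    // which in turn calls the struct destructor.
    C c = new C;
    c = null;

    GC.collect(); // collection (and finalization) is not guaranteed
}
```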
Re: custom memory management
On Friday, 28 February 2014 at 13:06:05 UTC, Namespace wrote: Which can still take a long time. It annoys me very much that with arrays I cannot rely on the struct DTors being called. Yep, this bug has immediately got my vote :) Unfortunately, it requires someone knowledgeable about the GC to fix it in the foreseeable future.
Re: custom memory management
On Friday, 28 February 2014 at 12:54:32 UTC, Dicebot wrote: [...] I believe it is a DMD bug though. There is no reason why destructors can't be run here. Most likely it is just a defect of the current GC implementation that everyone got used to. [...] Looks like the decision was to fix it once the precise GC gets added. Which can still take a long time. It annoys me very much that with arrays I cannot rely on the struct DTors being called.
Re: custom memory management
On Friday, 28 February 2014 at 13:32:33 UTC, Namespace wrote: I will vote, too. It's somewhat strange: since it works with delete, shouldn't it also work with the current GC? Someone should figure out why and how delete works this way. :) Well, delete is deprecated, so it can do any kind of arcane horrors :) The more idiomatic destroy + GC.free pair will work because destroy is a template function.
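The destroy + GC.free pairing mentioned here looks roughly like this (a minimal sketch; the `Handle` class is hypothetical):

```d
import core.memory : GC;
import std.stdio;

class Handle
{
    ~this() { writeln("finalized"); }
}

void main()
{
    auto h = new Handle;
    destroy(h);             // runs the destructor now, deterministically
    GC.free(cast(void*) h); // returns the block to the GC heap
    h = null;               // never touch h after this point
}
```

Unlike the old delete, this split makes the two steps explicit: destroy handles finalization, GC.free handles deallocation.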
Re: custom memory management
On Friday, 28 February 2014 at 13:16:40 UTC, Dicebot wrote: Yep, this bug has immediately got my vote :) [...] I will vote, too. It's somewhat strange: since it works with delete, shouldn't it also work with the current GC? Someone should figure out why and how delete works this way. :)
Re: custom memory management
On Friday, 28 February 2014 at 13:38:59 UTC, Dicebot wrote: Well, delete is deprecated, so it can do any kind of arcane horrors :) The more idiomatic destroy + GC.free pair will work because destroy is a template function. No, currently it is not deprecated. It is suggested to be deprecated. :P And destroy doesn't finalize the data. :/ See: http://forum.dlang.org/thread/bug-1225...@https.d.puremagic.com%2Fissues%2F and http://forum.dlang.org/thread/bug-1227...@https.d.puremagic.com%2Fissues%2F But that is only a workaround. I don't want to call arr.finalize every time because the GC is silly... I meant that someone should analyse the internal delete code and implement something like it for the current GC for struct arrays (and AA's).
Re: custom memory management
On Friday, 28 February 2014 at 14:08:11 UTC, Namespace wrote: No, currently it is not deprecated. It is suggested to be deprecated. :P [...] I don't want to call arr.finalize every time because the GC is silly... "Intended to be deprecated" is a better way to put it. There is not the smallest chance it will stay in the long term; better get used to it. A quick solution would have been to merge finalize with destroy itself. As I have mentioned, it is a template and has all the necessary information for traversal. The proper solution will be to fix the struct destructor bug, as it is the root cause of your array issues too, and then patch destroy to only do traversal when pointers are not owned by the GC. I meant that someone should analyse the internal delete code and implement something like it for the current GC for struct arrays (and AA's). I am too scared of what I may find :)
Re: custom memory management
On Friday, 28 February 2014 at 14:47:31 UTC, Dicebot wrote: [...] The proper solution will be to fix the struct destructor bug, as it is the root cause of your array issues too, and then patch destroy to only do traversal when pointers are not owned by the GC. I'm not sure if that is possible with the current gc. But I hope so!
custom memory management
I am trying to implement a structure with value semantics which uses an internal buffer. The first approach looks like this: struct S { byte[] buf; this(int size) { buf = new byte[size]; } this(this) { buf = buf.dup; } ~this() { delete buf; } } This works fine as long as such an object is allocated on the stack (so the destructor is called at the end of the scope). However, when the destructor is called by the gc, the buffer might already be collected, and freeing it a second time is obviously invalid. My second approach was to allocate the buffer outside the gc-managed heap, like so: this(int size) { buf = (cast(byte*)core.stdc.stdlib.malloc(size))[0..size]; } ~this() { core.stdc.stdlib.free(buf.ptr); } Sadly, this is incorrect as well. Because if such an object is collected by the gc, but the gc decides not to run the destructor, the buffer will never be freed. If the gc would either always or never call struct destructors, one of my two solutions would work. But the current situation is (in compliance with the language spec) that it is called _sometimes_, which breaks both solutions. One way the first approach could work would be for the destructor to check whether it was called by the gc, and skip the deallocation in that case. But as far as I know, the gc does not provide such a method. It would be trivial to implement, but seems kinda hackish. I know the suggested way in D is to not deallocate the buffer at all, but rely on the gc to collect it eventually. But it still puzzles me that it seems to be impossible to do. Anybody have an idea how I could make it work? thanks, simon
Re: custom memory management
A struct is a value type, so it is passed by value and placed on the stack. { S s; } S's DTor is called at the end of the scope. So you can rely on RAII as long as you use structs.
Re: custom memory management
On Thursday, 27 February 2014 at 22:04:50 UTC, Namespace wrote: A struct is a value type, so it is passed by value and placed on the stack. [...] So you can rely on RAII as long as you use structs. On the stack, yes. But not on the heap: S[] s = new S[17]; s = null; the GC will collect the memory eventually, but without calling any destructor. On the other hand: class C { S s; } C c = new C; c = null; in this case, when the gc collects the memory, it will call both destructors: the one of C as well as the one of S.
Re: custom memory management
On Thursday, 27 February 2014 at 22:15:41 UTC, Steven Schveighoffer wrote: On Thu, 27 Feb 2014 16:46:15 -0500, Simon Bürger [...] More and more, I think a thread-local flag of I'm in the GC collection cycle would be hugely advantageous -- if it doesn't already exist... I don't think it does, so I actually implemented it myself (not thread-local, but same locking as the rest of the gc): github.com/Krox/druntime/commit/38b718f1dcf08ab8dabb6eed10ff1073e215890f . But now that you mention it, a thread-local flag might be better.
Re: custom memory management
On Thu, 27 Feb 2014 16:46:15 -0500, Simon Bürger simon.buer...@rwth-aachen.de wrote: I know the suggested way in D is to not deallocate the buffer at all, but rely on the gc to collect it eventually. But it still puzzles me that it seems to be impossible to do. Anybody have an idea how I could make it work? Unfortunately, nothing is foolproof. The most correct solution is likely to use malloc/free. Yes, if you just new one of these, you will have to destroy it. But if you have a destructor that uses GC allocated memory such an object can NEVER be a member of a heap-allocated class. More and more, I think a thread-local flag of I'm in the GC collection cycle would be hugely advantageous -- if it doesn't already exist... -Steve
Rust style memory management in D?
I've been looking into alternatives to C++ and have been following D since back in the D1 Tango/Phobos days, and recently started digging in again. I'm quite impressed with the progress, and I've started a simple toy game project to test out some of the language features. One thing that really bothers me as someone who works on applications where responsiveness is critical and even 50ms delays can cause problems is the GC though - it seems that the only really good answer is to just use malloc/free from the C standard library, but that seems really awful for such an elegant language. Every area but memory management is beautiful and can do what you like, but trying to avoid the GC is downright painful and forces you to ditch much of the safety D provides. Given that it is designed to be a systems programming language, where memory management is so important, this seems like a tremendous oversight. In my searching, I ran across Rust, which is relatively new and takes a rather different approach to a lot of things, including memory management - making unique (owned) pointers (`~` is used to denote them) a language feature alongside garbage collection, so that the compiler can enforce ownership rules. One of the main language developers has noted that the unique pointer with ownership-transfer rules is used so much more than the GC option that he's trying to get GC removed from the language and placed in the standard library. Given D's target domain of high-performance systems programming, this memory management model seems like a radically better fit than "screw it, we'll GC everything". I've seen a few other people talk about this issue, and the difficulty of avoiding the GC seems to be THE argument against D that I wind up seeing. Has such a model been considered, and is there a reason (besides the fact that the entire standard library would probably have to be rewritten) that it isn't used?
Re: Rust style memory management in D?
Please post on [0] regarding better memory management, as work is currently being done on rewriting the GC (which was really needed). [0] http://forum.dlang.org/post/lao9fn$1d70$1...@digitalmars.com
Re: non-determinant object lifetime and memory management
On Sunday, 1 December 2013 at 02:29:42 UTC, bioinfornatics wrote: On Saturday, 30 November 2013 at 08:35:23 UTC, Frustrated wrote: [...] Why don't you use one of these approaches: - const ref int[]… as a function parameter - using shared/synchronized and a ref to the array It would seem that whichever approach I use needs to be consistent. The first case would require creating the arrays outside of the function, which doesn't solve the original problem. I'm not sure how the second case solves anything?
non-determinant object lifetime and memory management
I need to pass around some objects (specifically int[]) that may be used by several other objects at the same time. While I could clone these and free them when the parent object is done, this wastes memory for no real reason except ease of use. Since many objects may contain a pointer to the array, what would be the best way to deal with deallocating them? I could wrap the array in a collection and use ARC, but is there a better way? Is there something in std.allocators that can help? (Should be obvious that I'm trying to avoid the GC.)
Re: non-determinant object lifetime and memory management
Frustrated: I need to pass around some objects (specifically int[]) that may be used by several other objects at the same time. [...] Your use case seems a good fit for the GC. Otherwise, take a look at std.typecons.RefCounted, to be used in a wrapper that uses alias this. Bye, bearophile
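The RefCounted-wrapper-with-alias-this suggestion could be sketched as follows (the `SharedInts` wrapper and its members are hypothetical, just to show the shape of the idea):

```d
import std.stdio;
import std.typecons : RefCounted;

// Hypothetical wrapper: the int[] payload lives in a ref-counted
// store, and alias this lets the wrapper be used like the array itself.
struct SharedInts
{
    private RefCounted!(int[]) payload;

    this(int[] data)
    {
        // refCountedPayload auto-initializes the store on first access
        payload.refCountedPayload = data;
    }

    ref int[] get() { return payload.refCountedPayload; }
    alias get this;
}

void main()
{
    auto a = SharedInts([1, 2, 3]);
    auto b = a;      // copy shares the same store, bumps the count
    b[0] = 42;       // indexing forwarded through alias this
    writeln(a[0]);   // both views see the change
} // last copy going out of scope releases the payload
```

Note the slice itself is still GC-allocated in this sketch; combining this with a C-heap buffer (as in the rest of this thread) would make it fully GC-free.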
Re: non-determinant object lifetime and memory management
On Saturday, 30 November 2013 at 08:35:23 UTC, Frustrated wrote: I need to pass around some objects (specifically int[]) that may be used by several other objects at the same time. [...] You can use the Array type in std.container [1]. It uses ref counting and the C heap internally. [1] http://dlang.org/phobos/std_container.html#.Array
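Basic usage of std.container.Array, the ref-counted, C-heap-backed container being recommended here:

```d
import std.container : Array;
import std.stdio;

void main()
{
    auto arr = Array!int(1, 2, 3); // elements stored in malloc'd memory
    arr ~= 4;                      // appending works like built-in arrays

    auto copy = arr;               // copies share the ref-counted store

    foreach (x; arr[])             // arr[] gives a range over the elements
        write(x, " ");
    writeln();
} // when the last copy goes out of scope, the memory is freed - no GC
```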
Re: non-determinant object lifetime and memory management
On Saturday, 30 November 2013 at 12:51:46 UTC, Rene Zwanenburg wrote: You can use the Array type in std.container [1]. It uses ref counting and the C heap internally. [1] http://dlang.org/phobos/std_container.html#.Array How does it work? When you call clear, it decrements the reference count, and at 0 it frees the memory? Is that the only self-managed container in std.container?