Re: Why is GC.collect `pure`
On Wednesday, August 2, 2023 12:02:35 PM MDT Nick Treleaven via Digitalmars-d-learn wrote:
> On Wednesday, 2 August 2023 at 17:55:12 UTC, Nick Treleaven wrote:
>> On Wednesday, 2 August 2023 at 17:52:00 UTC, Nick Treleaven wrote:
>>> Now I'm wondering why those functions are marked `pure` - they
>>> must affect the GC's bookkeeping state.
>
> I guess it was because the GC's internal state is not supposed to be
> observable outside internal GC functions. I find it harder to accept
> some of those than `GC.malloc` being pure, because GC.disable and
> GC.enable will affect how long future allocations will take. That
> latency can be significant and observed by the program. Also
> conceptually they are changing GC state.

Well, affecting how long something takes doesn't have anything to do with pure. Another process running on the box could have the same effect. Whether a function can be pure or not is strictly a matter of whether it's possible for it to access any non-immutable data that wasn't passed to it via its arguments.

Of course, the whole question of pure gets weird with the GC, because we want to be able to treat GC allocations as pure when in fact they do mutate the GC's state even though the GC was not passed in via a function argument. So, when dealing with the GC, purity becomes a question of maintaining the guarantees that the compiler expects with regards to pure and allocations rather than the more straightforward question of whether the function can access any non-immutable data from anything outside of itself via anything other than its arguments.

In general, the GC's state is essentially treated as being separate from that of the program itself and thus irrelevant to stuff like pure. As far as the state of the program itself is concerned, the GC could allocate with new and then never bother to free anything, or it could be running a collection every single time new is called - or anything in between.
As far as D is concerned, none of that matters to the state of the actual program. It's just a GC concern.

That being said, we do need to be careful when dealing directly with GC functions, because we don't want what the compiler does based on pure to end up having undesirable side effects with regards to the GC. As such, whenever deciding whether such functions can be pure, we need to carefully consider what the compiler will potentially do based on pure.

So, remember that the most the compiler will do with pure is optimize out multiple calls to the same strongly pure function within a single expression where each call has the exact same arguments. The compiler will also use that information to determine whether a value might be unique, so that it can determine whether it's safe to convert mutable data to immutable, but that's primarily a type system concern rather than a runtime one.

As such, the question of whether it's safe to make a GC function pure essentially comes down to what would happen if an expression such as foo(12) * foo(12) were optimized down to one call to foo instead of two, because foo is strongly pure. And remember that because foo is strongly pure, its arguments are immutable (or were implicitly converted to immutable), and thus its execution in both cases would be identical. The exact same sequence of calls to GC functions would occur in each call to foo, so we don't have to worry about something like the GC being enabled in one call to foo but disabled in the other.

The functions that you referred to were GC.collect, GC.minimize, GC.enable, and GC.disable. So, the question becomes how (if at all) it affects the state of the program itself if the number of calls to those functions changes due to a call to foo being optimized out. And it shouldn't take much to see that it doesn't matter.
Calling enable or disable multiple times in a row would just result in extraneous calls that do nothing, so optimizing that down to a single call wouldn't matter. Calling minimize multiple times would similarly not matter. It's highly unlikely that multiple calls to minimize within a short period of time would make any difference over a single call, and even if they did, they would just affect how much free memory the GC had. They would have no effect on the state of the program itself (and remember that as far as the rest of the program is concerned, the state of the GC doesn't even exist; the program's semantics would be the same even if new always grabbed more memory from the OS and collections never did anything).

Now, GC.collect is a bigger question, because it can affect when objects are actually destroyed, which obviously can affect the state of the program outside of the GC, based on what the destructors involved do. So, that _can_ affect the program outside of memory allocations. However, when that happens is already effectively random, and it isn't even guaranteed that it will ever
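The foo(12) * foo(12) scenario above can be sketched in D. This is an illustrative example, not from the original post: foo and its body are made up, but it is a genuinely strongly pure function whose duplicate calls the compiler is allowed to fold:

```d
// A strongly pure function: its parameter is a value type with no
// mutable indirections, so two calls with the same argument in one
// expression may be folded into a single call by the compiler.
int foo(int x) pure
{
    // GC allocation is permitted in `pure` code; the GC's bookkeeping
    // is treated as separate from the program's own state.
    auto arr = new int[](x);
    return cast(int) arr.length * 2;
}

void main()
{
    // Whether the compiler emits one call or two, the result - and the
    // observable program state - is identical.
    int r = foo(12) * foo(12);
    assert(r == 576);
}
```

Whether the fold actually happens is an optimization detail; the point is that the program cannot tell the difference either way.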
Re: Why is GC.collect `pure`
On Wednesday, 2 August 2023 at 17:55:12 UTC, Nick Treleaven wrote:
> On Wednesday, 2 August 2023 at 17:52:00 UTC, Nick Treleaven wrote:
>> Now I'm wondering why those functions are marked `pure` - they
>> must affect the GC's bookkeeping state.

I guess it was because the GC's internal state is not supposed to be observable outside internal GC functions. I find it harder to accept some of those than `GC.malloc` being pure, because GC.disable and GC.enable will affect how long future allocations will take. That latency can be significant and observed by the program. Also conceptually they are changing GC state.
Re: Why is GC.collect `pure`
On Wednesday, 2 August 2023 at 17:52:00 UTC, Nick Treleaven wrote:
> Now I'm wondering why those functions are marked `pure` - they
> must affect the GC's bookkeeping state.

Here's the pull that added it: https://github.com/dlang/druntime/pull/3561
Why is this pure function taking a string literal not CTFE-executable?
Hi Guys! In my program, I have a custom String type, and I want to initialize some variables of it at compile time by casting a string literal to said custom String type. I thought I could achieve this straightforwardly, but after trying a bit, I could not find a (simple) working solution. I made this minimal example to show where the easy solutions all fall flat:

struct My_String
{
    long size;
    char* data;
}

My_String make_my_string(string s)
{
    My_String my_string;
    my_string.data = cast(char*) s.ptr;
    my_string.size = s.length;
    return my_string;
}

struct Dummy
{
    My_String s = make_my_string("hello!");
}

void main()
{
    Dummy dummy;
}

This produces the compilation error "cannot use non-constant CTFE pointer in an initializer My_String(6L, &"hello!"[0])". I do not understand this error message. What is the non-constant CTFE pointer here? The "data" member? If so, why does this compile:

struct My_String
{
    long size;
    char* data;
}

struct Dummy
{
    My_String s = My_String("hello!".length, cast(char*) "hello!".ptr);
}

void main()
{
    Dummy dummy;
}

Why does the error message show a call to My_String with filled-out members ("6L, &"hello!"[0]"), although the code only ever default-constructs a My_String variable? I am confused. And why on earth does this work:

struct My_String
{
    long size;
    char* data;
}

My_String make_my_string(string s)
{
    My_String my_string;
    my_string.data = cast(char*) s.ptr;
    my_string.size = s.length;
    return my_string;
}

void main()
{
    My_String s = make_my_string("hello!");
}

Please help, I have no idea what's going on here.
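Not part of the original post, but for reference: one variant of the poster's example that does initialize at compile time keeps the literal as a slice instead of extracting a raw pointer during CTFE. The layout change (string instead of char*, size_t instead of long) is an assumption that a slice member is acceptable:

```d
struct My_String
{
    size_t size;
    string data; // a slice keeps the literal reference intact through CTFE
}

My_String make_my_string(string s)
{
    return My_String(s.length, s);
}

struct Dummy
{
    // Evaluated via CTFE; slices of string literals are fine in static
    // initializers, unlike pointers manufactured during CTFE.
    My_String s = make_my_string("hello!");
}

void main()
{
    Dummy dummy;
    assert(dummy.s.size == 6);
    assert(dummy.s.data == "hello!");
}
```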
Re: Why is my @pure function @system when placed in a struct?
On 27.02.19 19:10, Dukc wrote:
> I tested a bit, and it appears that attribute inference is not done at
> all for templates inside structs - the argument need not be a delegate:
>
> struct S
> {
>     static int fImpl(Ret)() { return Ret.init; }
>     pragma(msg, __traits(getFunctionAttributes, fImpl!int)); // still tells us: `f` is @system
> }
> void main(){}
>
> A bug, unless I'm overlooking something.

It's not quite as simple as that. When you put the pragma in a function, the inferred attributes show up:

struct S { void f()() {} }

pragma(msg, __traits(getFunctionAttributes, S.f!())); /* @system */

void g()
{
    pragma(msg, __traits(getFunctionAttributes, S.f!())); /* Same line now says @safe. */
}

But I agree that this can't be right.
Re: Why is my @pure function @system when placed in a struct?
On Wednesday, 27 February 2019 at 17:23:21 UTC, Q. Schroll wrote:
> For whatever reason, when I put the code in a struct, the @safe testing
> line tells me it's @system now.

I tested a bit, and it appears that attribute inference is not done at all for templates inside structs - the argument need not be a delegate:

struct S
{
    static int fImpl(Ret)() { return Ret.init; }
    pragma(msg, __traits(getFunctionAttributes, fImpl!int)); // still tells us: `f` is @system
}
void main(){}

A bug, unless I'm overlooking something.
Re: Why is my @pure function @system when placed in a struct?
On Wednesday, 27 February 2019 at 18:06:49 UTC, Stefan Koch wrote:
> the struct gets drawn into your delegate-context. and I guess that
> taints the function.

Even if it did, it should not make the delegate @system. And it does not, since this manifests with static functions and function pointers too.
Re: Why is my @pure function @system when placed in a struct?
On Wednesday, 27 February 2019 at 17:23:21 UTC, Q. Schroll wrote:
> I have a template function `fImpl` I wish to instantiate manually using
> the new name `f`. Reason is simple: `f` should not be a template, but
> overloading it makes it easier that way. Nothing's simpler in D: [...]

the struct gets drawn into your delegate-context. and I guess that taints the function.
Why is my @pure function @system when placed in a struct?
I have a template function `fImpl` I wish to instantiate manually using the new name `f`. Reason is simple: `f` should not be a template, but overloading it makes it easier that way. Nothing's simpler in D:

int fImpl(T)(T value) { return cast(int) value; }

alias f = fImpl!int;
alias f = fImpl!long;

It works perfectly used like that. In my case, `T` isn't just a simple type, it's a delegate type. So it's rather like this:

alias BaseDG = int delegate(ref int);

int fImpl(DG : BaseDG)(scope DG callback)
{
    // NB: this is @safe iff callback is @safe
    int x = 0;
    return callback(x);
}

alias myDG = int delegate(ref int) @safe;
alias f = fImpl!myDG;

When I ask the compiler if `f` is @safe, it tells me: Hurray, it is!

pragma(msg, __traits(getFunctionAttributes, f)); // tells me: `f` is @safe

For whatever reason, when I put the code in a struct, the @safe testing line tells me it's @system now.

struct S
{
    // static: // static or not does not matter
    alias BaseDG = int delegate(ref int);

    int fImpl(DG : BaseDG)(scope DG callback) { return 0; }

    alias myDG = int delegate(ref int) @system;
    alias f = fImpl!myDG;

    pragma(msg, __traits(getFunctionAttributes, f)); // tells me: `f` is @system
}

I have no idea why. It is irrelevant whether the function template is `static` or even whether it calls the callback at all.
Why is this pure?
The following program compiles, and does what you'd expect:

struct A
{
    int a;
}

pure int func( ref A a )
{
    return a.a += 3;
}

As far as I can tell, however, it shouldn't. I don't see how or why func can possibly be considered pure, as it changes state external to the function. What am I missing? Or is this just a compiler bug?

Shachar
Re: Why is this pure?
On Monday, 25 August 2014 at 06:27:00 UTC, Shachar wrote:
> The following program compiles, and does what you'd expect:
>
> struct A { int a; }
> pure int func( ref A a ) { return a.a += 3; }
>
> As far as I can tell, however, it shouldn't. I don't see how or why
> func can possibly be considered pure, as it changes a state external
> to the function. What am I missing? Or is this just a compiler bug?
>
> Shachar

http://klickverbot.at/blog/2012/05/purity-in-d/
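The linked article covers D's distinction between weakly and strongly pure functions, which is why func above is accepted. A minimal sketch of the distinction (the function names here are illustrative, not from the original thread):

```d
struct A
{
    int a;
}

// Weakly pure: may mutate data reachable through its parameters, but
// nothing else (no globals, no static state). Its calls cannot be
// elided, but it may be called from strongly pure code.
pure int weak(ref A a)
{
    return a.a += 3;
}

// Strongly pure: parameters carry no mutable indirections, so the
// compiler may fold duplicate calls with identical arguments.
pure int strong(int x)
{
    return x + 3;
}

void main()
{
    A a = A(1);
    assert(weak(a) == 4);
    assert(a.a == 4); // the argument was mutated - still `pure` in D
    assert(strong(1) == 4);
}
```

So `pure` in D guarantees no access to mutable global state, not referential transparency; only the strongly pure subset gets call-elision treatment.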