On Monday, 16 May 2022 at 15:08:15 UTC, H. S. Teoh wrote:
> If you find yourself having to cast to/from immutable, you're using it wrong.

I clearly was, which is why I'm not using it anymore.
The question was "What are you stuck at? What was the most
difficult features to understand? etc.", so I listed the things
that tripped me up in my early time with D. I assumed the
context of this was collecting useful information for, say,
understanding what newcomers' sticking points are and maybe
thinking how to make D more accessible to them. Maybe this
wasn't the intent, but this kind of comes across more as a
lecture on how everything I was doing wrong is my own fault even
when I've already stated I'm *not doing those things anymore*.

> I write D programs that process input files all the time, including stuffing data into AAs and what-not, and it Just Works(tm).

Of course it just works, why wouldn't it?

> But not once did I bother with immutable (why should I?).

Because the documentation crams it down your throat. Or at least
it did, along with the early blog/wiki posts that were the only
source of D information I could find back at the time. And yes,
I pored over that page on the differences between const and immutable years ago. It sang immutable's praises at length, held it up as D's great innovation over C++, and implied it should be slapped on literally everything that never changes because of how great that was for multithreading. Hence why I (formerly) felt the need to define all my persistent definitions as immutable. Now I slap __gshared on everything and just let it go.
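To make that shift concrete, here's a minimal sketch of the before and after; the variable names and contents are invented for illustration:

```d
// Before: data built at runtime needs a cast to become immutable.
immutable string[int] names;

// After: a plain process-wide mutable global; no casts, no ceremony.
__gshared string[int] aliases;

shared static this() {
    string[int] tmp = [1: "one", 2: "two"]; // built at runtime
    names = cast(immutable) tmp;            // the cast immutable kept demanding
    aliases = [1: "uno", 2: "dos"];         // __gshared: just assign
}

void main() {
    assert(names[1] == "one" && aliases[2] == "dos");
}
```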

> For anything performance-related, I don't even look at dmd, I use LDC all the way. DMD is only useful for fast compile-run-debug cycle, I don't even look at performance numbers for DMD-produced executables, it doesn't mean anything to me.

According to the dlang.org wiki entry for LDC:

> druntime/Phobos support is most likely lacking (and probably requires a new version predefined by the compiler).

So I'm not touching it for now. I don't have time to investigate a completely new compiler and track down incompatibilities when DMD works and I'm already invested in it. The fact that LDC is more performant than DMD surely does not imply that attempting to optimize DMD-compiled programs is futile. My use of immutable was based in part on a (mistaken) assumption that it was a recommended best practice for multithreading safety and performance.

> Just make things private and use getters/setters to control access. Like I said, if you find yourself writing lots of casts

Equally irritating. Direct access is far less annoying to write. Yes, I could use mixins to automate some of this (sketched below), but it still uglifies the code even more, to no great advantage for my purposes. Getters/setters have always felt relevant only to large team environments, where you can't count on another developer knowing certain things aren't meant to be touched. For simple self-protection, I like C#'s "readonly" keyword, as I said; that's a statement of opinion. I wish D had something similarly painless to use, but it doesn't, so I'm out of luck. I just use direct access and try not to make mistakes instead.
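For reference, the mixin automation I mean would look roughly like this; ReadOnly and Config are names I've made up for illustration:

```d
// A rough sketch of the mixin route (invented names, getter-only properties).
mixin template ReadOnly(T, string name) {
    mixin("private T _" ~ name ~ ";");
    mixin("@property T " ~ name ~ "() const { return _" ~ name ~ "; }");
}

struct Config {
    mixin ReadOnly!(int, "width");   // generates _width plus a getter
    mixin ReadOnly!(int, "height");
    this(int w, int h) { _width = w; _height = h; }
}

void main() {
    auto c = Config(640, 480);
    assert(c.width == 640);  // readable...
    // c.width = 1;          // ...but not assignable: no setter was generated
}
```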

> Why would you want to force deterministic memory management onto GC-allocated objects? Just use malloc/free (or whatever else

Because when I first got into D this was *not made clear*. The
GC was sold so heavily and every single non-GC approach to memory
management was made to look like, at best, a crude circumvention,
and at worst a pariah. In C++ I could new/delete, in ObjC I had
alloc/autorelease; D had new, and at the time also delete, though it was never adequately explained that delete pretty much did nothing, and even then both it and its hacky replacement __delete were bugged anyway. Perhaps this was my own ignorance, fair enough. NOW I
already use malloc/free for everything important. But is this
helpful to future newcomers? Wait till they screw up, don't
understand why, then tell them after the fact they were doing it
wrong, if they're still around by that point? To this day, as
your post demonstrates, the GC is still extolled as the correct
solution to 99% of problems, with alternatives only grudgingly
admitted to after a person reports running into problems.
On top of this, at the time I got into D, the GC just plain DIDN'T work for arbitrary chunks of data unless you carefully zeroed them out. I could easily write a trivial program that read a binary file's raw data into a ubyte[] array, let it go out of scope without any escaped pointers, and crashed with an out-of-memory error as the loop cheerfully ate up 8+ GB (a sketch of the pattern follows below). The move to 64-bit and whatever bugfixes may have happened in the past years helped, and fortunately everything "Just Works" NOW. But back then, it
didn't. So when on top of this my programs started having
disastrous performance issues due to GC collections because the
delete I was mistakenly counting on actually did nothing, I
scrambled for a solution and was shown GC.free, because the
concept of looking up and importing std.core.etc.c.whoever and
just using malloc/free was still treated with distaste, like
there was just absolutely no time to spend on anyone who didn't
fully embrace D's GC ethos. "Oh, you don't like GCs? You just
don't get them, go use C instead" wasn't helpful.
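For what it's worth, the out-of-memory pattern I described looked roughly like this; the file name, sizes, and loop count here are invented:

```d
// Sketch of the failure mode: on 32-bit, false pointers lurking in the raw
// bytes could pin each block, so memory use only ever grew until the crash.
import std.file : read;

void main() {
    foreach (i; 0 .. 10_000) {
        auto data = cast(ubyte[]) read("big.bin"); // several MB of arbitrary bytes
        // ...process data; no pointers escape the loop body...
    } // each iteration's block *should* be collectable here, but wasn't
}
```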
When a new programmer walks up and says "How do I create an
object?", what do you tell them? Do you say "Oh, that's easy,
you just type:"
```d
import core.stdc.stdlib : malloc, free;
import core.lifetime : emplace, forward;

// Allocate a class instance with malloc and construct it in place.
// (If T holds pointers into the GC heap, you'd also need GC.addRange.)
T NEW(T, Args...)(auto ref Args args) if (is(T == class)) {
    enum size = __traits(classInstanceSize, T);
    void* mem = malloc(size);
    scope(failure) free(mem);
    return mem !is null ? emplace!T(mem[0 .. size], forward!args) : null;
}

// Run the destructor, null out the reference, then release the memory.
void FREE(T)(ref T obj) if (is(T == class)) {
    auto mem = cast(void*) obj;
    destroy(obj);
    obj = null;
    free(mem);
}

// Usage (inside a function):
auto foo = NEW!Foo;
```
"Simple as that!" No, of course not. You say `auto foo = new
Foo;` and that's the end of the discussion. And then they write
a little program that runs for 5ms and everything is fine. And
then they write a bigger program that runs for 6 hours and come
back and say "Hey, this is running really poorly, how do I
manually delete objects?" and the answer is "Oh, *you've been
doing it wrong.* Why are you trying to do it that way?"

> Slap @nogc on main() and go from there, if that helps.

No, it doesn't. If I'm going to slap @nogc on main, there's no reason to use D anymore, because ~60% of the features I like about D are dependent on the GC even if they don't end up partaking in it. Among many other things: countless lines of writef, which isn't @nogc. Neither is calling formattedWrite into a std.container.array, or sformat with a preallocated buffer (sketched below). I don't know if there's yet another solution, but at this point I've stopped caring. I'm not going to spend days rewriting my own implementation of format. Slapping @nogc on main doesn't solve problems, it creates them. Some of the features I like may require the GC to exist, but if they don't actually NEED to use it, I'm not going to be shy about using them in a way such that they don't, even if they can't be made @nogc. If writef throws an exception, I probably have bigger problems to worry about than that allocation.
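Here's a minimal sketch of what I mean by the sformat case; the buffer size and format string are arbitrary:

```d
// sformat into a preallocated buffer avoids GC allocation on the happy path,
// yet still can't be called from @nogc code: a formatting failure would
// allocate and throw.
import std.format : sformat;
import std.stdio : writeln;

void main() {
    char[64] buf;
    const s = sformat(buf[], "frame %s: %.2f ms", 42, 16.67);
    writeln(s); // s is a slice of buf; no GC allocation for the formatting
}
```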

> IME, most programs don't actually need to care whether their memory is manually-managed or GC'd.

Mine do.

> then I either preallocate or use malloc/free.

Which I do now, years after making the mistake of blindly trusting that D's holy GC would solve every problem. Having had this leave a pretty bad taste in my mouth, I now avoid the GC whenever possible, even when I don't have to. Maybe a
run-and-done program can get along just fine allocating
everything to the GC. But maybe I'll need to modularize it some
day in the future and call it from another program with far more
intensive requirements that doesn't want superfluous data being
added to the GC every frame. Far better to just keep your house
clean every day than let the trash pile up and wait for the maid
to come, IMO. Inevitably it's going to be her day off when you
have guests coming over.
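In code terms, "keeping the house clean" is just preallocate-and-reuse; the buffer size and frame loop here are invented:

```d
// Allocate one scratch buffer up front with malloc and reuse it every frame,
// so per-frame work never adds garbage for a collector to find later.
import core.stdc.stdlib : malloc, free;

void main() {
    enum size = 4096;
    auto p = cast(ubyte*) malloc(size);
    assert(p !is null);
    auto buf = p[0 .. size];
    scope(exit) free(p);
    foreach (frame; 0 .. 1000) {
        buf[] = 0; // reset and reuse; nothing touches the GC heap per frame
        // ...fill buf with this frame's data and process it...
    }
}
```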