Walter Bright wrote:
3. (2) has a great advantage in doing message passing between threads. This model was popularized by Erlang, and is very successful. You can do message passing without immutable references, but you've got to hope and pray that your programming team didn't make any mistakes with it. With immutable references, you have a statically enforced guarantee. Value types (and immutable references) do not need synchronization.
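A minimal sketch of the point, using D's std.concurrency module (names and payload are illustrative, not from the thread): immutable data can be sent to another thread without any lock, because the type system guarantees the receiver cannot mutate it.

```d
// Sketch: message passing of immutable data with std.concurrency.
// send() only accepts data that is safe to share across threads
// (immutable, shared, or value types), so this is statically checked.
import std.concurrency;
import std.stdio;

void worker()
{
    // Receive an immutable array; no synchronization is needed,
    // since no thread can ever write through this reference.
    auto data = receiveOnly!(immutable(int)[]);
    writeln("received ", data.length, " elements");
}

void main()
{
    immutable int[] payload = [1, 2, 3];
    auto tid = spawn(&worker);
    tid.send(payload);  // legal precisely because payload is immutable
}
```

Sending a plain mutable `int[]` here would be rejected at compile time, which is the "statically enforced guarantee" referred to above.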

But you need to allocate this data from a shared, garbage-collected heap,

You do anyway.

(By the way, the thought of allocating data from a shared heap to avoid synchronization is ridiculous. Even more so because the shared heap uses a single global lock, and requires stopping all threads in the process to run a GC mark cycle to free memory. But we'll see how it gets implemented eventually.)

Data that is neither immutable nor shared can be allocated from a thread local heap. If references to immutable data can be passed to other threads, you'll have to allocate all immutable data from a shared heap. This will slow down even normal uses. Or you copy immutable data as it "escapes" to other threads.

I'm only speculating here (I don't even claim to be knowledgeable, let alone an "expert", on this topic), but I think one can conclude that it isn't as simple as it sounds. Is it? If you're using thread-local heaps (which are going to be essential for performance on multicores), you can't just hand immutable data over to other threads and expect it to work, or to be performant.

It's also unnerving to see immutability being tossed around as the ultimate solution for multithreading without visible results. Meanwhile, Go has excellent multithreading support TODAY, and I don't think it has immutability.

which again slows down the whole thing. Is there really an advantage over copying?

Copying will invoke the garbage collector. Since you argued that that is slow, avoiding the need for it will make things faster.

I could imagine that it would be faster to transfer data between threads using a static shared buffer (like with Unix pipes between processes) than by allocating from a shared heap. You would then be able to allocate everything from thread-local heaps, which may be a lot more efficient than shared heaps, especially when a garbage-collection cycle is triggered.


4. Immutability and purity enable users to reason about a program. Otherwise, you have to rely on the (probably wrong) documentation of a function to see what its effects are, and whether it has any side effects.
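As a small illustration of that point (my own example, not from the thread): a `pure` function's signature is itself the documentation of its effects, because the compiler rejects any hidden side effect.

```d
// Sketch: the signature alone tells the reader what this call can do.
// `pure` forbids reading or writing global mutable state, `nothrow`
// forbids escaping exceptions, `@safe` forbids unsafe operations.
int sumOfSquares(const(int)[] xs) pure nothrow @safe
{
    int total = 0;
    foreach (x; xs)
        total += x * x;
    return total;   // result depends only on xs; nothing else is touched
}
```

A caller needs no out-of-band documentation to know that `sumOfSquares` cannot, say, log to a file or mutate a global counter; the compiler would reject such a body.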

If the program logic gets more complicated because of the type system, this isn't going to help much. Now I see you applying language hacks like DIP2 to reduce the damage. Is there an end to it?

C function prototypes increased the complexity, but it was darn well worth it. You can either have more complexity in the language, or you can spend endless hours manually checking to see if convention was followed - and even then you can't be sure.

That's fine, when it is worth it.


Yes, there was a recently discovered bug which enabled modifying an immutable array. This was a bug, and has been fixed. A bug does not mean the concept is broken.

Sure, but the question is: will all those bugs ever be fixed?

Forgive me, but every month 20 to 40 bugs get fixed. You can see it in the change log. I don't understand these complaints.

I'm sorry and no offense, but I'd say the quality of the D toolchain is rather bad. Even beginners can hit compiler bugs. You need some time until you work around the "weak" parts of dmd without thinking about it. (My favourite class of dmd bugs are forward reference errors and bogus symbol lookups when using renamed/selective imports or having circular module imports. They are getting fixed, but they never seem to die down completely.)

Of course it's not that bad, and the toolchain isn't utter garbage (except OPTLINK). But it could be... better. I wonder how many people get scared away from D by the fact alone that dmd only supports OMF on Windows, and that they have to go through a lot of stuff to link to DLLs, let alone static libraries... things like this don't get any attention or even acknowledgment from you. You'd probably say "use this COFF to OMF converter" (yeah thanks it didn't work), or "compile your library with DMC" (uh...)... anyway, it's the same with some bugs which seem to be considered low priority.

And what matters is not the number of fixed bugs, but the number of unfixed bugs. If you keep adding new features to a compiler, the number of bugs won't go down.

I'm not saying that you're not fixing enough bugs, I'm just wondering how this situation can seriously go on as it is. I mean, I don't even care (I got used to it and dmd compiles my code fine), and I don't really want to put any pressure on you, but sometimes I think I'm watching a train derailing in slow motion, just that you don't know whether the train will really derail, or if it will make it.

We have had immutable for over two years, yet we're still adding features to make using it less of a pain? Even though D2 is being finalized? Something is wrong here.


Also, how much is this reliability worth if you can just cast away immutable? It's exactly the same syntax you have to use for relatively harmless things, like casting a float to an integer.

It's not allowed in @safe functions.

I hope it's clear that the current syntax for casting immutable is dangerous. For example, if a change to a type declaration turns a dynamic cast into an immutable cast somewhere else, the compiler will say nothing.
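A sketch of the hazard being described (my own example): the same `cast` syntax performs both a harmless value conversion and a dangerous removal of immutability, so a type change elsewhere can silently turn one into the other.

```d
// Sketch: one cast syntax, two very different meanings.
void main()
{
    float f = 3.7;
    int i = cast(int) f;          // harmless numeric conversion

    immutable int[] data = [1, 2, 3];
    int[] m = cast(int[]) data;   // same syntax, but strips immutable;
                                  // writing through m is undefined behavior
}
```

Both casts compile in non-@safe code with no warning, which is why a declaration change that reroutes a cast from the first kind to the second goes unnoticed.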

It's almost as if non-@safe code is a deprecated feature that should only be used for low-level optimizations or runtime internals, and the language developers don't care if there are evil programmer traps in the non-@safe parts of the language.

I don't like this notion of non-@safe features being second-class citizens in D. E.g., is it even possible to make memory-intensive programs performant without resorting to manual memory allocation? (As opposed to relying on the current D GC, which performs rather badly.)
