> seem to be implying that the GC is going to be completely removed for 1.0

Sorry, I am not implying this at all. I am just saying that 1.0 will be 
released with a memory-management mechanism (the GC) that is already in the 
process of being replaced. This is bad for two reasons:

  * people will not invest in the language, since they know they are buying 
something that is already on the verge of obsolescence. How long will the GC 
remain available once the new runtime is deemed stable?
  * the standard library, which should provide examples of good code, will 
be littered with annotations that are meant for the new runtime and ignored 
by the current one. People will read it and be puzzled.
> By this reasoning, the only language you should use are languages that have 
> been formally verified like Ada Spark, Idris, or APL

Sorry again, I did not explain myself well. By formal verification I did not 
mean a computer checking the correctness of the code; I was talking about 
proofs done by people, on paper.

For instance, you mention that Haskell is based on many research articles. 
Well, most of them come with a (human) proof of correctness of the mechanism 
they propose. Many GC algorithms have also been proved correct; it is not 
especially difficult.

But the way the new runtime is developing, people keep finding new examples 
where the proposed mechanism leaves dangling pointers, frees memory too 
early, and so on. This is good, but it should be a phase of experimentation, 
even on paper. Then, once one is sure the mechanism is sound (it leaves no 
dangling pointers, and so on), it should not be too difficult to prove it 
correct (again on paper; I am not talking about writing an interpreter in 
Coq).
