On Friday, 16 March 2018 at 22:25:50 UTC, jmh530 wrote:
This sort of analysis applies to programming languages in exactly the same way. If I'm a company, do I build products using language X or language Y? If I'm a person, do I spend N hours learning language X or language Y (or do the next best thing you can do... March Madness?)? What if I already know language X? Then it's pure marginal cost to learn language Y.
For me, learning the language itself is the low cost. The high cost is in finding the tooling and the ecosystem, maintaining the configuration, and being sure that the language is supported over time and can target the platforms I am likely to be interested in (now and in the future).
So, out of curiosity, I write toy programs in new languages to see what they are like, but I am not likely to adopt any language that doesn't have a dedicated IDE. I'm not interested in taking a hit on tooling-related costs.
That last part actually makes me reluctant to adopt Rust, Dart and Angular. So I'd say the threshold for moving from "non-critical" usage to "critical" usage is quite high.
On the other hand, I have a lot less resistance to adopting TypeScript, since it is a fairly thin layer over JavaScript. Thus I can easily move away from it if it turns out to be limiting down the road.
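To illustrate the "thin layer" point: TypeScript's type annotations are erased at compile time, so the emitted JavaScript is essentially the source minus the types. A minimal sketch (the names here are hypothetical, just for illustration):

```typescript
// TypeScript source: static annotations sit on top of ordinary JavaScript.
interface User {
    name: string;
    visits: number;
}

function greet(user: User): string {
    return `Hello, ${user.name} (visit #${user.visits})`;
}

console.log(greet({ name: "Ada", visits: 3 }));

// After compilation with tsc, the interface disappears entirely and the
// function becomes plain JS:
//
//   function greet(user) {
//       return `Hello, ${user.name} (visit #${user.visits})`;
//   }
//
// So abandoning TypeScript later means deleting annotations, not rewriting logic.
```

That erasure model is what keeps the exit cost low compared to languages with their own runtime or toolchain.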
C programmers don't just switch to D or Rust or whatever the moment they see it has some "technical" features that are better. That's not what we observe. The marginal benefit has to exceed the marginal cost.
Actually, I'd say no marginal benefit is worth moving away from a platform with quality tooling.
So you have to win on productivity and performance (memory, portability, speed).
