On Friday, 5 January 2018 at 11:49:44 UTC, Joakim wrote:
> Yes, but when I pointed out that it's fine to think you're the best as long as you stay focused on bettering the flaws you still have,
I don't think that thinking you're the best brings anything but disadvantages, actually… except perhaps when you are advertising.
> What would be better, a million JS programmers with 10k great ones who "grow your infrastructure," or 150k D programmers with 30k great ones doing the same? Holding everything else equivalent proportionally, I'd say the latter.
Well, that is not the scale we are talking about here, but actually the former is better if it means you get twice as many people paid to grow the ecosystem full time. If you compare JavaScript with D on that metric, you are closer to a 1000:1 ratio or more in JavaScript's favour… Not a fair comparison, but that is the reality that drives adoption.
When you reach critical mass within a specific domain, a lot more people will be doing full-time ecosystem development… (Facebook, Google, Microsoft, plus a very large number of smaller companies).
The browser domain is very large, so not a fair comparison, of course.
> speeding up a fundamentally slow and inefficient language design, which a core team of great programmers wouldn't have put out there in the first place. :P
Doesn't really matter in most cases. The proper metric is "sufficient for the task at hand". So, if many people are paid to do full-time ecosystem development, that also means the tool is sufficient for a large number of use cases…
You can do the same in browsers as people do with Python. Use something else for those parts where speed is paramount: stream from a server, use WebGL or WebAssembly, or use the browser engine cleverly. For instance, I've implemented instant text search using CSS, even in IE9; the browser engines were tuned for it, so it was fast.
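To give a rough idea of what I mean, here is a minimal, hypothetical TypeScript sketch of that kind of CSS-driven filtering (not the code I actually used back then, and written against modern DOM APIs rather than IE9). It assumes each list item carries its lowercased text in a data-text attribute and that there is an input with id "search"; the input handler rewrites a single style rule, so the selector engine decides which items stay visible instead of a JavaScript loop walking the list.

// Rewrite one CSS rule on every keystroke; the style engine does the filtering.
const styleEl = document.createElement("style");
document.head.appendChild(styleEl);

const input = document.querySelector<HTMLInputElement>("#search")!;
input.addEventListener("input", () => {
  // Strip quotes and backslashes so the typed text cannot break out of the selector.
  const term = input.value.toLowerCase().replace(/["\\]/g, "");
  styleEl.textContent = term
    ? `li.result:not([data-text*="${term}"]) { display: none; }`
    : "";
});

The point is simply that the matching is done by the browser's selector engine, which is heavily optimised, so it stays fast even for fairly long lists.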
People use the same kind of thinking with C++/D/Rust as well, e.g. use the GPU when the CPU is too slow, use a cluster, or use a database lookup…
Both computer hardware and the very capable internet connections people have (at least in the West) are changing the equation.
It is easier to think about a single entity, like one programming language with a small set of isolated great programmers writing an application that runs on an isolated CPU, but the world is much more complicated now than it used to be, both in the options available and in the environment for computing.
> of programming. As for "full stack," a meaningless term which I bet actually has less CS bachelors than the percentage I gave. ;)
I understand "full stack" to mean that you can quickly adapt to doing database, server, client and GUI development.
> Saying that most C++ programmers also use python implies that having two tools that you choose from is enough. In that case, you're basically agreeing with me, despite your previous statements otherwise, as I was saying we can't expect most programmers to learn more than one tool, ie a single language.
No. The programming style for Python is very different from that of C++. Just because many C++ programmers also know Python doesn't mean they don't benefit from knowing other languages as well. I bet many also know JavaScript and basic Lisp.
> Could be, but that changes nothing about the reality that most programmers are just going to pick one language and some set of frameworks that are commonly used with it.
Ok. I personally don't find the standard libraries for Java and C# difficult to pick up as I go. Google gives very good hits on those.
Application frameworks are more tedious, but they also become outdated more quickly, so you might have to learn or relearn one for a new project anyway.
> That is the current reality, it doesn't matter what hypothetical kind of development you have in mind.
That's not an argument… That is just an unfounded claim.
> D users are the exception that proves the rule, the much larger majority not using D because they're already entrenched in their language.
I don't really think D users are as exceptional as they often claim in these forums…
> when you _couldn't_, to entrench themselves in that market. And as I've pointed out to you before, they're still much more efficient, so until battery tech gets much better, you're going to want to stick with those natively-compiled languages.
Not convinced. I think most people on smartphones spend a lot of time accessing browser-driven displays, either in the actual browser or as a widget in an app…
It doesn't really matter, because the dominant activity isn't heavy things like theorem proving or data mining… What matters is that the display-rendering code is efficient, and that is mostly handled by the GPU anyway.
> Which of those python-glued GUI apps has become popular? That was the question: I can find toy apps in any language, that nobody uses.
That's not right. Scripting is commonly used for the high-level layer in productivity applications.
> I've given you an estimate of D users, you haven't said why that hasn't passed your critical mass threshold.
What estimate? Based on what? Using what method? Does it hold up on GitHub? If not, what is the explanation?
