On Sunday, 2 September 2018 at 19:30:58 UTC, Nick Sabalausky (Abscissa) wrote:
On 09/02/2018 05:43 AM, Joakim wrote:
Most will be out of business within a decade or two, as online learning takes their place.

I kinda wish I could agree with that, but schools are too much of a sacred cow to be going anywhere anytime soon. And for that matter, the online ones still have to tackle many of the same challenges anyway, WRT successful and effective teaching.

Really the only difference is "physical classroom vs. no physical classroom". Well, that and maybe price, but the community colleges have had the unis well beaten on price for a long time (and they even manage to do a good job teaching certain things, depending on the instructor), yet they haven't made the unis budge. The best they've been able to do is establish themselves as a supplement to the unis, where people start out with some of their gen-ed classes at the (comparatively) cheap community colleges for the specific purpose of later transferring to a uni.

That's because the current online efforts simply slap the in-class curricula online, whereas what really needs to change is both what's taught, moving away from the incoherent mix of theory and Java that basically describes every degree (non-CS too), and how it's tested and certified. When that happens, the unis will collapse, because online learning will be so much better at a fraction of the cost.

As for sacred cows, the newspaper business, i.e. journalism, was one of them, but it's at death's door, as I pointed out in this forum years ago:

https://en.m.wikipedia.org/wiki/File:Naa_newspaper_ad_revenue.svg

There are a lot of sacred cows getting butchered by the internet; college will be one of the easier ones to get rid of.

On Sunday, 2 September 2018 at 21:07:20 UTC, Nick Sabalausky (Abscissa) wrote:
On 09/01/2018 03:47 PM, Everlast wrote:

It's because programming is done completely wrong. All we do is program like it's 1952, all wrapped up in a nice box and bow tie. We should have tools and a compiler design that all work interconnected, with complete graphical interfaces that aren't based in the text GUI world (an IDE is just a fancy text editor). I'm talking about 3D code representation using graphics, so projects can be navigated visually in a dynamic way, and many other things.

There are really two main, but largely independent, aspects to what you're describing: Visual representation, and physical interface:

A. Visual representation:
-------------------------

By visual representation, I mean "some kind of text, or UML-ish diagrams, or 3D environment, etc".

What's important to keep in mind here is: The *fundamental concepts* involved in programming are inherently abstract, and thus equally applicable to whatever visual representation is used.

If you're going to make a diagram-based or VR-based programming tool, it will still be using the same fundamental concepts that are already established in text-based programming: Imperative loops, conditionals and variables. Functional/declarative immutability, purity and higher-order funcs. Encapsulation. Pipelines (like ranges). Etc. And indeed, all GUI-based programming tools have worked this way. Because how *else* are they going to work?
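
(To make that concrete with a quick, made-up D sketch, with entirely hypothetical names: the same little computation below is written once as an imperative loop and once as a range pipeline. A diagram- or VR-based tool would still be manipulating exactly these concepts, just drawn differently.)

import std.algorithm : filter, map, sum;

int sumEvenSquaresImperative(int[] xs)
{
    int total = 0;
    foreach (x; xs)               // imperative loop, conditional, mutable variable
    {
        if (x % 2 == 0)
            total += x * x;
    }
    return total;
}

int sumEvenSquaresPipeline(int[] xs)
{
    // the same computation as a declarative range pipeline with higher-order funcs
    return xs.filter!(x => x % 2 == 0)
             .map!(x => x * x)
             .sum;
}

unittest
{
    assert(sumEvenSquaresImperative([1, 2, 3, 4]) == 20);
    assert(sumEvenSquaresPipeline([1, 2, 3, 4]) == 20);
}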

If what you're really looking for is something that replaces or transcends all of those existing, fundamental programming concepts, then what you're *really* looking for is a new fundamental programming concept, not a visual representation. And once you DO invent a new fundamental programming concept, being abstract, it will again be applicable to a variety of possible visual representations.

That said, it is true some concepts may be more readily amenable to certain visual representations than others. But, at least for all the currently-known concepts, any combination of concept and representation can certainly be made to work.

B. Physical interface:
----------------------

By this I mean both actual input devices (keyboards, controllers, pointing devices) and also the mappings from their affordances (ie, what you can do with them: push button x, tilt stick's axis Y, point, move, rotate...) to specific actions taken on the visual representation (navigate, modify, etc.)

The mappings, of course, tend to be highly dependent on the visual representation (although, theoretically, they don't strictly HAVE to be). The devices themselves, less so: For example, many of us use a pointing device to help us navigate text. Meanwhile, 3D modelers/animators find it's MUCH more efficient to deal with their 3D models and environments by including heavy use of the keyboard in their workflow instead of *just* a mouse and/or Wacom alone.

An important point here is that using a keyboard has a tendency to be much more efficient for a much wider range of interactions than, say, a pointing device like a mouse or touchscreen. There are some things a mouse or touchscreen is better at (ie, pointing and learning curve), but even on a touchscreen, pointing takes more time than pushing a button and is somewhat less composable with additional actions than, again, pushing/holding a key on a keyboard.

This means that while pointing, and indeed, direct manipulation in general, can be very beneficial in an interface, placing too much reliance on it will actually make the user LESS productive.

The result:
-----------

For programming to transcend the current text/language model, *without* harming either productivity or programming power (as all attempts so far have done), we will first need to invent entirely new high-level concepts which are simultaneously both simple/high-level enough AND powerful enough to obsolete most of the nitty-gritty lower-level concepts we programmers still need to deal with on a regular basis.

And once we do that, those new super-programming concepts (being the abstract concepts that they inherently are) will still be independent of visual representation. They might finally be sufficiently powerful AND simple that they *CAN* be used productively with graphical non-text-language representation...but they still will not *require* such a graphical representation.

That's why programming is still "stuck" in last century's text-based model: it's not actually stuck. Text still has significant deal-winning benefits over newer developments. And that's because, even when "newer" does provide improvements, newer still isn't *inherently* superior on *all* counts. That's a fact of life that is easily, and frequently, forgotten in fast-moving domains.

Ironically, you're taking a way too theoretical approach to this. ;) Simply think of the basic advances that a graphical debugger like the one in Visual Studio provides, and extrapolate that out several levels.

For example, one visualization I was talking about on IRC a decade ago, and which I still haven't seen anybody doing (though I haven't really searched for it), is a high-level graphical visualization of the data flowing through a program. Just as dmd/ldc generate timing profile data for D functions by instrumenting the function call timings, you could instrument the function parameter data too (you're not using globals much, right? ;) ) and record the data stream generated by some acceptance testing. You'd then periodically run those automated acceptance tests and view the data stream differences as a color-coded flow visualization through the functions, with the data that stays the same shown as green and the data that changed between versions of the software shown as red. Think of something like a buildbot console, but where you could zoom in on different colors until you see the actual data stream:
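
To sketch what I mean, here's a rough, hypothetical D example, with made-up names like logParams and applyDiscount, of instrumentation written by hand for a single function; the real thing would have the compiler inject something like it into every function and use a far better log format:

import std.stdio : File;

// Hypothetical stand-in for compiler-inserted instrumentation: append each
// call's parameters to a per-run log, to be diffed against a baseline later.
void logParams(Args...)(string funcName, Args args)
{
    auto log = File("datastream.log", "a");
    log.write(funcName);
    foreach (arg; args)           // tab-separated argument values, one line per call
        log.write("\t", arg);
    log.writeln();
}

// A made-up function under test.
int applyDiscount(int price, int percent)
{
    logParams("applyDiscount", price, percent);   // would be injected by the compiler
    return price - price * percent / 100;
}

void main()
{
    // An "acceptance test" run whose data stream gets recorded.
    applyDiscount(200, 10);
    applyDiscount(99, 25);
}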

https://ci.chromium.org/p/chromium/g/main/console

You'd then verify that the data differences are what you intended (for example, if you refactored a function to change what parameters it accepts, the data differences for the same external user input may be valid) and either accept the new data stream as the baseline or make changes until it's okay. If you refactor your code a lot, you could waste time on a lot of useless churn, but unit tests and other tests have the same problem with refactoring.
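
And a similarly rough sketch of the diff side, assuming the one-line-per-call log format and file names from the sketch above; a real tool would roll these per-function change counts up into the zoomable, color-coded view I described, rather than just printing them:

import std.algorithm : splitter;
import std.array : array;
import std.range : zip;
import std.stdio : File, writefln;
import std.string : strip;

void main()
{
    // Baseline stream from the last accepted run vs. the stream just recorded.
    auto baseline = File("baseline.log").byLineCopy.array;
    auto current  = File("datastream.log").byLineCopy.array;

    size_t[string] total, changed;
    foreach (pair; zip(baseline, current))
    {
        auto funcName = pair[0].splitter("\t").front;
        ++total[funcName];
        if (pair[0].strip != pair[1].strip)
            ++changed[funcName];          // a "red" call: its data differs
    }

    foreach (funcName, calls; total)
        writefln("%s: %s of %s recorded calls changed",
                 funcName, changed.get(funcName, 0), calls);
}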

This high-level approach would benefit most average software much more than unit testing, as you usually don't care about individual components or hitting all their edge cases. That's why most software doesn't use unit tests in the first place.

Note that I'm not saying unit tests aren't worthwhile, particularly for libraries, only that it'd realistically be easier, and much less effort, to get programmers to use the high-level view I'm outlining than to write a ton of unit tests. Ideally, you do both and they complement each other, along with integration testing and the rest.
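
(For reference, the unit-testing half of that complement is already about as cheap as it gets in D, since unittest blocks are built into the language; a trivial, made-up example for the same hypothetical applyDiscount:)

// D's built-in unit tests: compile and run with `dmd -unittest -main -run discount.d`.
int applyDiscount(int price, int percent)
{
    return price - price * percent / 100;
}

unittest
{
    // Component-level edge cases, the kind of thing the high-level
    // data-stream diff wouldn't bother checking.
    assert(applyDiscount(200, 10) == 180);
    assert(applyDiscount(100, 0)  == 100);
    assert(applyDiscount(0, 50)   == 0);
}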

In other words, we don't have to get rid of text representations altogether: there's a lot of scope for much better graphical visualizations of the textual data and code we're using now. You could do everything I just described by modifying the compiler to instrument your functions, dumping log files, and running them through diff, but that's the kind of low-level approach we're stuck with now. Ideally, we'd recognize that _certain workflows are so common that we need good graphical interfaces for them_ and standardize on those, but I think Everlast's point, which I agree with, is that this is happening much more slowly than it should.

On Sunday, 2 September 2018 at 21:19:38 UTC, Ola Fosheim Grøstad wrote:
So there are a lot of dysfunctional aspects at the very foundation of software development processes in many real world businesses.

I wouldn't expect anything great to come out of this...

One of the root causes of that dysfunction is that there's way too much software being written. Open source has actually helped alleviate this: instead of every embedded or server developer who needs an OS kernel convincing management that they should write their own, they now have a hard time justifying it when a free, OSS kernel like Linux is out there, which is why so many of those places use Linux now. Of course, you'd often like to modify the kernel, and Linux may not be stripped down or modular enough for some, but there are always other OSS kernels like Minix or Contiki for them.

Software is still in its early stages, like the automotive industry in the US when there were hundreds of car manufacturers, most of them producing as low-quality a product as software companies do now:

https://en.m.wikipedia.org/wiki/Automotive_industry_in_the_United_States

Rather than whittling down to three large manufacturers as the US car industry did, open source provides a way for thousands of software outfits, even individual devs, to collaborate on commonly used code while still being able to specialize when needed, particularly with permissive licenses. So much of that dysfunction will be solved by consolidation, but a completely different kind of consolidation than happened with cars, because software is completely different from physical goods.
