I have to delete some quoted text to make this manageable.

On 9/2/2018 5:07 PM, Nick Sabalausky (Abscissa) wrote:
> [...]
> GUI programming has been attempted a lot. (See Scratch for one of the
> latest, possibly most successful attempts). But there are real,
> practical reasons it's never made significant in-roads (yet).
>
> There are really two main, but largely independent, aspects to what
> you're describing: Visual representation, and physical interface:
>
> A. Visual representation:
> -------------------------
>
> By visual representation, I mean "some kind of text, or UML-ish
> diagrams, or 3D environment, etc".
>
> What's important to keep in mind here is: The *fundamental concepts*
> involved in programming are inherently abstract, and thus equally
> applicable to whatever visual representation is used.

> If you're going to make a diagram-based or VR-based programming tool, it
> will still be using the same fundamental concepts that are already
> established in text-based programming: Imperative loops, conditionals
> and variables. Functional/declarative immutability, purity and
> high-order funcs. Encapsulation. Pipelines (like ranges). Etc. And
> indeed, all GUI based programming tools have worked this way. Because
> how *else* are they going to work?
>
> If what you're really looking for is something that replaces or
> transcends all of those existing, fundamental programming concepts, then
> what you're *really* looking for is a new fundamental programming
> concept, not a visual representation. And once you DO invent a new
> fundamental programming concept, being abstract, it will again be
> applicable to a variety of possible visual representations.

Well, there are quite a few programming approaches that bypass the
concepts you've listed. For example, production (rule-based) systems and
agent-oriented programming. I've become interested in stuff like this
recently, because it looks like a legitimate way out of the mess we're
in. Among other things, I found this really interesting Ph.D. thesis
about a system called LiveWorld:

http://alumni.media.mit.edu/~mt/thesis/mt-thesis.html

Interesting stuff. I believe it would work very well in VR, if
visualized properly.
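To make the contrast concrete, here's a minimal sketch of what I mean by
a production system (the facts and rules are made up for illustration;
this isn't from LiveWorld or any particular engine). Instead of writing
control flow, you declare condition/action rules and let an engine fire
whichever one matches the current facts:

```python
# Toy production (rule-based) system: no imperative loop in the "program"
# itself, just rules fired against a working memory until nothing matches.

# Working memory: a set of facts.
facts = {("light", "red")}

# Rules: (condition, action) pairs over the fact set.
rules = [
    (lambda f: ("light", "red") in f,
     lambda f: f - {("light", "red")} | {("light", "green"), ("car", "stopped")}),
    (lambda f: ("light", "green") in f and ("car", "stopped") in f,
     lambda f: f - {("car", "stopped")} | {("car", "moving")}),
]

def run(facts, rules, max_steps=10):
    """Repeatedly fire the first rule whose condition holds and whose
    action actually changes the facts; stop at a fixed point."""
    for _ in range(max_steps):
        for cond, act in rules:
            if cond(facts):
                new = act(facts)
                if new != facts:
                    facts = new
                    break
        else:
            break  # no rule changed anything
    return facts

print(run(facts, rules))
```

The point is that the programmer states *what* should happen under
*which* conditions; sequencing falls out of the engine, not out of
hand-written loops and branches.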

> That said, it is true some concepts may be more readily amenable to
> certain visual representations than others. But, at least for all the
> currently-known concepts, any combination of concept and representation
> can certainly be made to work.
>
> B. Physical interface:
> ----------------------
>
> By this I mean both actual input devices (keyboards, controllers,
> pointing devices) and also the mappings from their affordances (ie, what
> you can do with them: push button x, tilt stick's axis Y, point, move,
> rotate...) to specific actions taken on the visual representation
> (navigate, modify, etc.)
>
> The mappings, of course, tend to be highly dependent on the visual
> representation (although, theoretically, they don't strictly HAVE to
> be). The devices themselves, less so: For example, many of us use a
> pointing device to help us navigate text. Meanwhile, 3D
> modelers/animators find it's MUCH more efficient to deal with their 3D
> models and environments by including heavy use of the keyboard in their
> workflow instead of *just* a mouse and/or wacom alone.

That depends on the editor design. Wings 3D
(http://www.wings3d.com), for example, uses the mouse for most operations.
It's done well and it's much easier to get started with than something
like Blender (which I personally hate). Designers use Wings 3D for
serious work and the interface doesn't seem to become a limitation even
for advanced use cases.

> An important point here, is that using a keyboard has a tendency to be
> much more efficient for a much wider range of interactions than, say, a
> pointing device, like a mouse or touchscreen. There are some things a
> mouse or touchscreen is better at (ie, pointing and learning curve), but
> even on a touchscreen, pointing takes more time than pushing a button
> and is somewhat less composable with additional actions than, again,
> pushing/holding a key on a keyboard.
>
> This means that while pointing, and indeed, direct manipulation in
> general, can be very beneficial in an interface, placing too much
> reliance on it will actually make the user LESS productive.

I don't believe this is necessarily true. It's just that programmers and
designers today are really bad at utilizing the mouse. Most of them
aren't even aware of how the device came to be. They have no idea about
NLS or Doug Engelbart's research.

http://www.dougengelbart.org/firsts/mouse.html

They've never looked at subsequent research by Xerox and Apple.

https://www.youtube.com/watch?v=Cn4vC80Pv6Q

That last video blew my mind when I saw it. Partly because it was the
first time I realized that the five most common UI operations (cut,
copy, paste, undo, redo) have no dedicated keys on the keyboard today,
while similar operations on the Xerox Star did. Partly because I finally
understood the underlying idea behind icon-based UIs and realized it's
almost entirely forgotten now. It all ties together. Icons represent
objects. Objects interact through messages. Mouse and command keys allow
you to direct those interactions.
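If it helps, that model reads roughly like the following sketch. The
class, object names, and messages are all mine, invented for
illustration; they're not the actual Star commands. The idea is that
every icon is an object, and the generic command keys just send messages
any object can choose to handle:

```python
# Toy "icons are objects, objects interact through messages" model.
# Generic commands (open, copy) are messages sent to whatever the user
# points at, rather than per-application menu items.

class Icon:
    def __init__(self, name):
        self.name = name
        self.contents = []

    def receive(self, message, *args):
        """Dispatch a generic command to this object's handler, if any."""
        handler = getattr(self, "on_" + message, None)
        if handler is None:
            return f"{self.name}: doesn't understand '{message}'"
        return handler(*args)

    def on_open(self):
        return f"opened {self.name}: {[c.name for c in self.contents]}"

    def on_copy_to(self, target):
        target.contents.append(self)
        return f"copied {self.name} into {target.name}"

folder = Icon("folder")
doc = Icon("report")
print(doc.receive("copy_to", folder))  # point at doc, press COPY, point at folder
print(folder.receive("open"))          # same OPEN command works on any icon
print(doc.receive("print"))            # unhandled message, handled gracefully
```

The mouse picks the object, the command key picks the message, and the
same small vocabulary of messages applies uniformly to everything on
screen.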

There were other reasons too. That one video is a treasure-trove of
forgotten good ideas.

> The result:
> -----------
>
> For programming to transcend the current text/language model, *without*
> harming either productivity or programming power (as all attempts so far
> have done), we will first need to invent entirely new high-level
> concepts which are simultaneously both simple/high-level enough AND
> powerful enough to obsolete most of the nitty-gritty lower-level
> concepts we programmers still need to deal with on a regular basis.

That's one other thing worth thinking about. Are we dealing with the
right concepts in the first place? Most of my time as a programmer was
spent integrating badly designed systems. Is that actually necessary? I
don't think so. It's busy-work created by developers for developers.
Maybe better tooling would free up all that time to deal with real-life
problems.

There is this VR game called Fantastic Contraption. Its interface is
light-years ahead of anything else I've seen in VR. The point of the
game is to design animated 3D structures that solve the problem of
traversing various obstacles while moving from point A to point B. Is
that not "real" programming? You make a structure to interact with an
environment to solve a problem.

> And once we do that, those new super-programming concepts (being the
> abstract concepts that they inherently are) will still be independent of
> visual representation. They might finally be sufficiently powerful AND
> simple that they *CAN* be used productively with graphical
> non-text-language representation...but they still will not *require*
> such a graphical representation.
>
> That's why programming is still "stuck" in last century's text-based
> model: Because it's not actually stuck: It still has significant
> deal-winning benefits over newer developments. And that's because, even
> when "newer" does provide improvements, newer still isn't *inherently*
> superior on *all* counts. That's a fact of life that is easily, and
> frequently, forgotten in fast-moving domains.

Have you seen any of Bret Victor's talks? He addresses a lot of these
points.

https://vimeo.com/71278954
https://vimeo.com/64895205
https://vimeo.com/97903574
