On Sat, Sep 21, 2013 at 12:29 PM, Matt McLelland
<[email protected]> wrote:

> > An image could be interpreted as a high level world-map to support
> > procedural generation with colors indicating terrain types and heights.
>
> This is common practice in games, but it doesn't IMO make artists into
> programmers and it doesn't make the image into a program.
>

Not by itself, I agree. Just like one hair on the chin doesn't make a
beard, or one telephone doesn't make a social network.

But scale it up! One artist will eventually have dozens or hundreds of
data-objects representing different activities and interacting. In a
carefully designed environment, the relationships between these objects
also become accessible for observation, influence, and extension.

The only practical difference between what you're calling an 'artist' and a
'programmer' is scale. And, really, it's your vision of the artist's role
that's failing to scale, not the artist's vision. Artists are certainly
prepared to act as programmers if it means the freedom to do their work (cf.
Unreal Kismet or vvvv, for example). But they have an important requirement
that most languages today do not address well: immediate feedback and
concreteness.

A team of artists can easily build systems with tens of thousands of
interactions, at which point they'll face all the problems a team of
programmers does. It is essential that they have better tools for
modularizing, visualizing, understanding, and addressing these problems than
programmers have today.


>
> I think there is a useful distinction between user and programmer that
> should be maintained.
>

I think there should be a fuzzy continuum, no clear distinction. Sometimes
artists are more involved with concrete direct manipulations, sometimes
more involved with reuse or tooling, with smooth transitions between one
role and the other. No great gaps or barriers.

Do you have any convincing arguments for maintaining a clear distinction?
What precisely is useful about it?



> How can you view playing a game of Quake as programming? What's to be
> gained?
>

Quake is a game with very simple and immutable mechanics. The act of
playing Quake does not alter the Quake world in any interesting ways.
Therefore, we would not develop a very interesting artifact-layer program.
There would, however, be an implicit program developed by the act of
playing Quake: navigation, aiming, shooting. This implicit program would at
least be useful for developing action-scripts and Quake-bots so you can
cheat your way to the top. (If you aren't cheating, you aren't trying. :)

If you had a more mutable game world - e.g. Minecraft, Lemmings, Little Big
Planet 2, or even Pokemon Yellow
<http://aurellem.org/vba-clojure/html/total-control.html> -
then there is much more to gain by comprehending playing as programming,
since you can model interesting systems. The same is true for games
involving a lot of micromanagement: tower defense, city simulators,
real-time tactics and strategy. You could shift easily from micromanagement
to 'programming' higher level strategies.

Further, I believe there are many, many games we haven't been able to
implement effectively: real-time dungeon-mastering for D&D-like games, for
example, and the sort of live story-play children tend to perform -
changing the rules on-the-fly while swishing and swooping with dolls and
dinosaurs. There are whole classes of games we can't easily imagine today
because the tools for realizing them are awful and inaccessible to those
with the vision.

Comprehending user interaction as programming opens opportunities even for
games.

Of course, if you just want to play, you can do that.


>
> I find myself agreeing with most of your intermediate reasoning and then
> failing to understand the jump to the conclusion of tacit concatenative
> programming and the appeal of viewing user interfaces as programs.
>

Tacit concatenative (TC) programming makes it all work smoothly.

TC is very effective for:
* automatic visualization and animation
* streaming programs
* pattern detection (simple matching)
* simple rewrite rules
* search-based code generation
* Markov model predictions (user anticipation)
* genetic programming and tuning
* typesafe dataflow for linear or modal types

Individually, each of these may look like an incremental improvement that
could be achieved without TC.
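
For concreteness, here is a minimal sketch of the tacit concatenative style
(Python, and the word set - dup, drop, swap, add, mul - is purely
illustrative, not the vocabulary of any particular language): a program is
just a sequence of words transforming a stack, with nothing named or scoped.

    # Minimal tacit concatenative evaluator (illustrative sketch only).
    def evaluate(program, stack=None):
        stack = [] if stack is None else stack
        words = {
            "dup":  lambda s: s + [s[-1]],              # copy top of stack
            "drop": lambda s: s[:-1],                   # discard top of stack
            "swap": lambda s: s[:-2] + [s[-1], s[-2]],  # exchange top two items
            "add":  lambda s: s[:-2] + [s[-2] + s[-1]],
            "mul":  lambda s: s[:-2] + [s[-2] * s[-1]],
        }
        for token in program.split():
            if token in words:
                stack = words[token](stack)
            else:
                stack = stack + [int(token)]            # literals push themselves
        return stack

    # "3 4 add 2 mul" computes (3 + 4) * 2 with no variable names at all.
    print(evaluate("3 4 add 2 mul"))   # [14]

The program *is* the word sequence; there is nothing else to visualize,
rewrite, stream, or search over.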

You CAN get automatic visualization and animation with names, it's just
more difficult (no clear move vs. copy, and values held by names don't have
a clear location other than the text). You CAN do pattern recognition and
rewriting with names, it's just more difficult (TC can easily use regular
expressions). You CAN analyze for linear safety using names, it's just more
difficult (need to track names and scopes). You CAN predict actions using
names, it's just more difficult (machine-learning, Markov models, etc. are
very syntax/structure oriented). You CAN search logically for applicative
code or use genetic programming, it's just freakishly more difficult (a lot
more invalid or irrelevant syntax to search). You CAN stream applicative
code, it's just more difficult (dealing with scopes, namespaces).
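
To make the pattern-matching and rewrite point concrete: because a TC
program is a flat sequence of words, a peephole rewrite really is a textual
substitution. A minimal sketch, with hypothetical rewrite rules:

    import re

    # Each rule maps a word sequence to a simpler equivalent.
    # These particular rules are illustrative; a real system would have many.
    REWRITES = [
        (r"\bswap swap\b", ""),    # swapping twice is a no-op
        (r"\bdup drop\b", ""),     # copying then discarding is a no-op
        (r"\bdup swap\b", "dup"),  # the two copies are interchangeable
    ]

    def simplify(program):
        for pattern, replacement in REWRITES:
            program = re.sub(pattern, replacement, program)
        return " ".join(program.split())   # normalize whitespace

    print(simplify("3 4 swap swap add dup drop"))   # "3 4 add"

The same three rules against a name-based syntax would already need a
parser and scope-aware substitution.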

But every little point, every little bit of complexity, adds up, pushing
the system beyond viable accessibility and usability thresholds.

Further, these aren't "little" points, and TC is not just "marginally" more
effective. Visualization and animation are extremely important. Predicting
and anticipating user actions is highly valuable. Extracting code from
history, programming by example, and then tuning and optimizing that
extracted code are essential. Streaming commands is the very foundation.
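
On prediction specifically: a TC command history is itself just a flat word
sequence, so even a first-order Markov model over it yields useful
anticipation. A minimal sketch (the command stream is invented purely for
illustration):

    from collections import Counter, defaultdict

    # Count, for each command, which command the user issued next.
    history = "open rotate scale save open rotate save open rotate scale save".split()
    following = defaultdict(Counter)
    for current, nxt in zip(history, history[1:]):
        following[current][nxt] += 1

    def predict(word):
        """Suggest the most likely next command after `word`."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict("open"))    # rotate
    print(predict("rotate"))  # scale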

Stop cherry-picking your arguments; you've lost sight of the bigger
picture, or maybe you haven't glimpsed it yet. Step back. Try to address
ALL these points, simultaneously, in one system, while *keeping it simple*.
If you can do so with a named applicative model, I'll be impressed and
interested.


> I will occasionally have to give you an error message "usage of name is
> illegal in this context", right? For example, violates substructural
> types. I still count that as an easy translation.
>

Under your proposal, the safety property is no longer compositional, no
longer correct-by-construction (i.e. requiring only syntactically local
analysis to validate); it now requires a non-local post-hoc analysis (not
an easy one, if you do any sort of inference). And while this might not
seem important for the concerns you've been tracking so far, I ask you to
review how this might affect streaming, local rewrites, and similar.
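
To illustrate the locality I mean: in a tacit setting, a linearity check can
be a single left-to-right pass over the word sequence, with no environment
of names to track. A minimal sketch (the word behaviors - dup copies, drop
discards, send consumes - are hypothetical annotations, not any real type
system):

    # The stack is modeled as a list of flags: True = linear, False = copyable.
    def check_linear(program, stack):
        for word in program.split():
            if word == "dup":
                if stack[-1]:
                    return "error: dup would copy a linear value"
                stack.append(stack[-1])
            elif word == "swap":
                stack[-1], stack[-2] = stack[-2], stack[-1]
            elif word == "drop":
                if stack.pop():
                    return "error: drop would discard a linear value"
            elif word == "send":
                stack.pop()           # consuming a linear value is fine
            else:
                stack.append(False)   # literals are freely copyable
        return "ok"

    print(check_linear("42 swap send drop", [True]))   # ok
    print(check_linear("dup drop", [True]))            # error: dup would copy ...

Each word is judged on its own as it streams past; with names, the same
check must chase every use of the name through whatever scopes it escapes
into.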

You call it an 'easy' translation. I call it a 'lossy' translation.

Or perhaps a more fitting phrase is: trying to put the toothpaste back in
the tube.


>
> most of your ideas sound pretty good to me, but I think there are a couple
> of sticking points that I'm still not on board with.  I'm certainly open to
> the possibility that I just haven't gotten it yet, and either way I wish
> you the best of luck in getting your system going.
>

Thanks. I imagine most people would be less open, more dismissive, and I
appreciate how you've engaged me on this so far.

Warm Regards,

Dave
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
