Re: [fonc] Programmer Models integrating Program and IDE

2013-08-29 Thread David Barbour
On Wed, Aug 28, 2013 at 5:57 PM, John Carlson yottz...@gmail.com wrote:

 Multi-threaded Object-Oriented Stack Environment ... MOOSE for short.


Would you mind pointing me to some documentation? I found your document on
"A Visual Language for Data Mapping", but it doesn't discuss MOOSE. From the
intro thread, my best guess is that you added objects and arrays to your
RPN language? But I'm not sure how the multi-threading is involved.


 Also check out VIPR from Wayne Citrin and friends at UC Boulder. Also
 check out AgentSheets, AgentCubes and XMLisp while you are at it.  Not far
 from SimCity and friends. Also looking at videos from unreal kismet may be
 helpful if you haven't already seen them.


I've now checked these out.  I am curious what led you to recommend them.

To clarify, my interest in visual programming is about finding a way to
unify HCI with programming and vice versa. To make the 'programmer-model' a
formal part of the 'program' is, I now believe, the most promising step in
that direction after live programming. As I described (but did not clarify)
this enables the IDE to be very thin, primarily a way of rendering a
program and extending it. The bulk of the logic of the IDE, potentially
even the menu systems, is shifted into the program itself.

(While I am interested in game development, my mention of it was intended
more as a declaration of expressiveness than a purpose.)

Croquet - with its pervasively hackable user environment - is much closer
to what I'm looking for than AgentCubes. But even Croquet still has a
strong separation between 'interacting with objects' and 'programming'.

Other impressions:

VIPR - Visual Imperative PRogramming - seems to be exploring visual
representations. I was confused that they did not address acquisition or
assignment of data - those would be the most important edges in a data-flow
system. But I guess VIPR is more a control-flow model than a data-flow one.
One good point, made repeatedly in the VIPR papers, is that we need to avoid
edges because they create complexity that is difficult to comprehend,
especially as we zoom away from the graph.

I do like that Kismet is making reactive computation accessible and useful
to a couple million people.




 I think you should replace stack with collection


I could model a number of different collections, within the limit that it
be constructed of products (pairs) to fit the arrowized semantics
(http://en.wikipedia.org/wiki/Arrow_(computer_science)).
So far I've modeled:

* one stack (operate only near top - take, put, roll; no navigation)
* list zippers (navigational interface in one dimension: stepLeft,
stepRight)
* tree zipper (two-dimensional navigation in a tree; up, down, left, right)
* list zipper of stacks (stepLeft, stepRight, take, put, roll)
* named stacks via metaprogramming (ad-hoc navigation: foo goto)

The tree-zipper is the most expressive I can achieve without
metaprogramming.
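
For concreteness, here is what list-zipper navigation means, as a minimal
Haskell sketch (illustrative only; in my language the analogous structure is
encoded in the arrow's static type rather than as a runtime value):

    -- A list zipper: items to the left (nearest first), the focus, items right.
    data Zipper a = Zipper [a] a [a]

    stepLeft, stepRight :: Zipper a -> Maybe (Zipper a)
    stepLeft  (Zipper (l:ls) x rs) = Just (Zipper ls l (x:rs))
    stepLeft  _                    = Nothing
    stepRight (Zipper ls x (r:rs)) = Just (Zipper (x:ls) r rs)
    stepRight _                    = Nothing

The tree zipper is the same idea with a two-dimensional breadcrumb, which is
where up, down, left, and right come from.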

The more expressive collections, however, are not necessarily good. After
building the tree zipper, I couldn't figure out how I wanted to use it.
Same for the list zipper, though the 'hand' concept serves a similar role
(take and put instead of stepLeft and stepRight). For a list of anonymous
stacks: I tend to stick around on one stack for a while, and forget the
relative positions of other stacks. That's why I eventually went for named
stacks.



 Have you considered controlling stacks, program counters and iterators
 from the same basic metaphor? We used recorder buttons. Forward, Reverse,
 Stop, Fast Forward, and Fast Reverse.  Then undo (delete previous
 operation) and delete next operation. [..] You'd probably want to add copy
 and paste as well. [..] Along with the recorder metaphor we added
 breakpoints which worked travelling in either direction in the code.


My language doesn't have runtime stacks, program counters, or iterators.
But I've mentioned viewing and animating parts of the compile-time history.




I know you can make a recipe maker with a recipe,  but who decides what a
 recipe makes?


Another recipe maker; you need to bootstrap.



Can you make more than one type of thing at the same time?  Can a human
 make more than one type of thing at the same time?  Or a robot?


Living humans are always making more than one type of thing at a time. I
mean, unless you discount 'perspiration' and 'CO2' and 'heat' and 'sound'
and a bunch of other products I don't care to mention. I imagine the same
could be said for robots.  ;)

Humans can do a lot once it's shifted into their subconscious thoughts. But
their eye focus is about the size of a dime at arm's length, and they aren't
very good at consciously focusing on more than one problem at a time.
Robots, however, are only limited by their mobility, sensors, actuators,
processors, programming, and resources. Okay... that's a lot of limits. But
if you had the funds and the time and the skills, you could build a robot
that can make more than one thing at a time.

[fonc] Programmer Models integrating Program and IDE

2013-08-28 Thread David Barbour
I understand 'user modeling' [1] to broadly address long-term details (e.g.
user preferences and settings), mid-term details (goals, tasks, workflow),
and short-term details (focus, attention, clipboards and cursors,
conversational context, history). The unifying principle is that we have
more context to make smart decisions, to make systems behave in ways their
users expect. This is a form of context sensitivity, where the user is
explicitly part of the context.

Programming can be understood as a form of user interface. But,
historically, user modeling (in this case 'programmer modeling') has been
kept carefully separate from the program itself; instead, it is part of an
Integrated Development Environment (IDE).

*Hypothesis:* the separation of user-model from program has hindered both
programmers and the art of programming. There are several reasons for this:

1) Our IDEs are not sufficiently smart. The context IDEs keep is heuristic,
fragile, and can be trusted with only the simplest of tasks.
2) Poor integration with the IDE and visual environments: it is difficult
to assign formal meaning to gestures and programmer actions.
3) Programmer-layer goals, tasks, and workflows are generally opaque to the
IDE, the programs and the type system.
4) Our code must be explicit and verbose about many interactions that could
be implicit if we tracked user context.
5) Programmers cannot easily adjust their environment or language to know
what they mean, and act as they expect.

I believe we can do much better. I'll next provide a little background
about how this belief came to be, then what I'm envisioning.

*Background*

Recently, I started developing a tacit representation for an arrowized
reactive programming model. Arrows provide a relatively rigid 'structure'
to the program. In the tacit representation, this structure was represented
as a stack consisting of a mix of compile-time values (text, numbers,
blocks) and runtime signals (e.g. mouse position). Essentially, I can give
the stack a 'static type', but I still used FORTH-like idioms to roll and
pick items from the stack as though it were a dynamic structure. With
just a little static introspection, I could even model `7 pick` as copying
the seventh element of the stack to the top of the stack.
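
To make that concrete, here is a rough sketch in Haskell, assuming the stack
is encoded as nested pairs with the top element outermost (the names are
hypothetical; `7 pick` generalizes `pick2` below with a little type-level
induction):

    -- A stack whose shape is fully visible in its type: (top, rest).
    type ExampleStack = (Int, (String, (Bool, ())))

    roll2 :: (a, (b, r)) -> (b, (a, r))       -- bring the second item to the top
    roll2 (a, (b, r)) = (b, (a, r))

    pick2 :: (a, (b, r)) -> (b, (a, (b, r)))  -- copy the second item to the top
    pick2 s@(_, (b, _)) = (b, s)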

But I didn't like this single stack environment. It felt cramped.

Often, I desire to decompose a problem into multiple concurrent tasks or
workflows. And when I do so, I must occasionally integrate intermediate
results, which can involve some complex scattering and gathering
operations. On a single stack, this integration is terribly painful: it
involves rolling or copying intermediate signals and values upwards or
downwards, with relative positions that are difficult to remember.
Conclusion: a single stack is good only for a single, sequential task - a
single pipeline, in a dataflow model.

But then I realized: I'm not limited to modeling a stack. A stack is just
one possible way of organizing and conceptualizing the 'type' of the arrow.
I can model any environment I please! (I'm being serious. With the same
level of introspection needed for `7 pick`, I could model a MUD, MOO, or
interactive fiction in the type system.) After experimenting with tree
zippers [2] or a list of anonymous stacks [3], I'm kind of (hopefully)
settling on an easy-to-use environment [4] that consists of:

* current stack
* hand
* current stack name
* list of named stacks

The current stack serves the traditional role. The 'hand' enables
developers to 'take' and 'put' objects (and juggle a few of them, like
'roll' except for the hand) - it's really convenient even for operating on
a single stack, and also helps carry items between stacks (implicit data
plumbing). The list of named stacks is achieved using compile-time
introspection (~type matching for different stack names) and is very
flexible:

* different stacks for different tasks; ability to navigate to a different
stack (goto)
* programmers can 'load' and 'store' from a stack remotely (treat it like a
variable or register)
* programmers can use named stacks to record preferences and configuration
options
* programmers can use named stacks to store dynamic libraries of code (as
blocks)
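
A minimal sketch of this environment as a plain Haskell record (take', put',
and goto are hypothetical names; in the real model the whole structure lives
in the arrow's static type, and these operations are compile-time rewrites):

    data Env v = Env
      { current :: [v]                 -- current stack
      , hand    :: [v]                 -- items held while moving between stacks
      , curName :: String              -- current stack name
      , others  :: [(String, [v])]     -- list of named stacks
      }

    take' :: Env v -> Maybe (Env v)    -- move top of current stack into hand
    take' e = case current e of
      (x:xs) -> Just e { current = xs, hand = x : hand e }
      []     -> Nothing

    put' :: Env v -> Maybe (Env v)     -- move top of hand onto current stack
    put' e = case hand e of
      (x:xs) -> Just e { hand = xs, current = x : current e }
      []     -> Nothing

    goto :: String -> Env v -> Env v   -- stash current stack, switch focus
    goto s e = Env target (hand e) s rest
      where stash  = (curName e, current e) : others e
            target = maybe [] id (lookup s stash)
            rest   = filter ((/= s) . fst) stash

Note that goto carries the hand along unchanged - that is exactly the
implicit data plumbing mentioned above.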

As I developed this rich environment, it occurred to me that I had
essentially integrated a user-model with the program itself. Actually, my
first thought was closer to "hey, I'm modeling a character in a game! Go go
Data Plumber!" The programmer is manipulating an avatar, navigating from
task to task and stack to stack. The programmer has items in hand, plus a
potential inventory (e.g. an inventory stack). To push metaphors a bit: I
can model keyrings full of sealer/unsealer pairs, locked rooms with sealed
values, unique 'artifacts' and 'puzzles' in the form of affine and relevant
types [5], quest goals in the form of fractional types (representing
futures/promises) [6], and 'spellbooks' in the form of static capabilities
[7]. But in

Re: [fonc] Programmer Models integrating Program and IDE

2013-08-28 Thread John Carlson
Multi-threaded Object-Oriented Stack Environment ... MOOSE for short.

Also check out VIPR from Wayne Citrin and friends at UC Boulder.  Also
check out AgentSheets, AgentCubes and XMLisp while you are at it.  Not far
from SimCity and friends.  Also looking at videos from Unreal Kismet may be
helpful if you haven't already seen them.

I think we're moving towards automated game design though, which will
become the next platform.

Good luck!

Re: [fonc] Programmer Models integrating Program and IDE

2013-08-28 Thread John Carlson
I think you should replace stack with collection, but perhaps that's too
javaesque.

Re: [fonc] Programmer Models integrating Program and IDE

2013-08-28 Thread John Carlson
Have you considered controlling stacks, program counters and iterators from
the same basic metaphor?  We used recorder buttons.  Forward, Reverse,
Stop, Fast Forward, and Fast Reverse.  Then undo (delete previous
operation) and delete next operation.

Re: [fonc] Programmer Models integrating Program and IDE

2013-08-28 Thread John Carlson
Along with the recorder metaphor we added breakpoints which worked
travelling in either direction in the code.
On Aug 28, 2013 8:39 PM, John Carlson yottz...@gmail.com wrote:

 You'd probably want to add copy and paste as well.

Re: [fonc] Programmer Models integrating Program and IDE

2013-08-28 Thread John Carlson
We used static breakpoints and relative cursors.  Making the breakpoints
more dynamic would be an interesting research project.  We were able to
make cursors in text dependent on each other.

Re: [fonc] Programmer Models integrating Program and IDE

2013-08-28 Thread John Carlson
We also had the concepts of text region definition, which was persistent,
and text region, which was computed at runtime.  Same with text location
definition and text location.  That way, we could adapt to different
inputs.

Re: [fonc] Programmer Models integrating Program and IDE

2013-08-28 Thread John Carlson
These may be your details.

Re: [fonc] Programmer Models integrating Program and IDE

2013-08-28 Thread John Carlson
How does one make a recipe maker in Minecraft?  I know you can make a
recipe maker with a recipe,  but who decides what a recipe makes?  Can you
make more than one type of thing at the same time?  Can a human make more
than one type of thing at the same time?  Or a robot?