[fonc] Programmer Models integrating Program and IDE

2013-08-28 Thread David Barbour
I understand 'user modeling' [1] to broadly address long-term details (e.g.
user preferences and settings), mid-term details (goals, tasks, workflow),
and short-term details (focus, attention, clipboards and cursors,
conversational context, history). The unifying principle is that we have
more context to make smart decisions, to make systems behave in ways their
users expect. This is a form of context sensitivity, where the user is
explicitly part of the context.

Programming can be understood as a form of user interface. But,
historically, user modeling (in this case 'programmer modeling') has been
kept carefully separate from the program itself; instead, it is part of
an Integrated Development Environment (IDE).

*Hypothesis:* the separation of user-model from program has hindered both
programmers and the art of programming. There are several reasons for this:

1) Our IDEs are not sufficiently smart. The context IDEs keep is heuristic,
fragile, and can be trusted with only the simplest of tasks.
2) Poor integration with the IDE and visual environments: it is difficult
to assign formal meaning to gestures and programmer actions.
3) Programmer-layer goals, tasks, and workflows are generally opaque to the
IDE, the programs and the type system.
4) Our code must be explicit and verbose about many interactions that could
be implicit if we tracked user context.
5) Programmers cannot easily adjust their environment or language to know
what they mean and act as they expect.

I believe we can do much better. I'll next provide a little background
about how this belief came to be, then what I'm envisioning.

*Background*

Recently, I started developing a tacit representation for an arrowized
reactive programming model. Arrows provide a relatively rigid 'structure'
to the program. In the tacit representation, this structure was represented
as a stack consisting of a mix of compile-time values (text, numbers,
blocks) and runtime signals (e.g. mouse position). Essentially, I can give
the stack a 'static type', but I still use FORTH-like idioms to roll and
pick items from the stack as though it were a dynamic structure. With
just a little static introspection, I could even model `7 pick` as copying
the seventh element of the stack to the top of the stack.
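
As a rough illustration (in Python rather than the tacit arrow language, with the stack modeled as a plain list, top at the end; the names and behavior follow FORTH convention, not any particular implementation of mine), `pick` and `roll` might behave like:

```python
def pick(stack, n):
    """Copy the nth element (1 = top of stack) onto the top of the stack."""
    if n < 1 or n > len(stack):
        raise IndexError("pick: stack too shallow")
    stack.append(stack[-n])
    return stack

def roll(stack, n):
    """Rotate the top n elements so the nth one comes to the top."""
    if n < 1 or n > len(stack):
        raise IndexError("roll: stack too shallow")
    stack.append(stack.pop(-n))
    return stack

s = [1, 2, 3, 4, 5, 6, 7]  # 1 is the 7th element from the top
pick(s, 7)                  # `7 pick`: copy it to the top
print(s)                    # [1, 2, 3, 4, 5, 6, 7, 1]
```

In the arrowized model the same operation is resolved statically: the introspection happens on the stack's type at compile time, so a too-shallow stack is a type error rather than a runtime exception.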

But I didn't like this single stack environment. It felt cramped.

Often, I desire to decompose a problem into multiple concurrent tasks or
workflows. And when I do so, I must occasionally integrate intermediate
results, which can involve some complex scattering and gathering
operations. On a single stack, this integration is terribly painful: it
involves rolling or copying intermediate signals and values upwards or
downwards, with relative positions that are difficult to remember.
Conclusion: a single stack is good only for a single, sequential task - a
single pipeline, in a dataflow model.

But then I realized: I'm not limited to modeling a stack. A stack is just
one possible way of organizing and conceptualizing the 'type' of the arrow.
I can model any environment I please! (I'm being serious. With the same
level of introspection needed for `7 pick`, I could model a MUD, MOO, or
interactive fiction in the type system.) After experimenting with tree
zippers [2] or a list of anonymous stacks [3], I'm kind of (hopefully)
settling on an easy-to-use environment [4] that consists of:

* current stack
* hand
* current stack name
* list of named stacks

The current stack serves the traditional role. The 'hand' enables
developers to 'take' and 'put' objects (and juggle a few of them, like
'roll' except for the hand) - it's really convenient even for operating on
a single stack, and also helps carry items between stacks (implicit data
plumbing). The list of named stacks is achieved using compile-time
introspection (~type matching for different stack names) and is very
flexible:

* different stacks for different tasks; ability to navigate to a different
stack (goto)
* programmers can 'load' and 'store' from a stack remotely (treat it like a
variable or register)
* programmers can use named stacks to record preferences and configuration
options
* programmers can use named stacks to store dynamic libraries of code (as
blocks)
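
A minimal operational sketch of such an environment (in Python; the names `take`, `put`, `goto`, `load`, and `store` come from the description above, but modeling them as runtime state is my simplification — in the actual design all of this lives in the arrow's static type):

```python
class Env:
    """Programmer environment: named stacks, a current stack, and a hand."""
    def __init__(self):
        self.stacks = {"default": []}   # list of named stacks
        self.current = "default"        # current stack name
        self.hand = []                  # items the programmer carries

    @property
    def stack(self):
        return self.stacks[self.current]

    def goto(self, name):
        """Navigate to a (possibly new) named stack."""
        self.stacks.setdefault(name, [])
        self.current = name

    def take(self):
        """Move the top of the current stack into the hand."""
        self.hand.append(self.stack.pop())

    def put(self):
        """Place the most recently taken item onto the current stack."""
        self.stack.append(self.hand.pop())

    def store(self, name):
        """Remotely move the top of the current stack onto a named stack."""
        self.stacks.setdefault(name, []).append(self.stack.pop())

    def load(self, name):
        """Remotely copy the top of a named stack onto the current stack."""
        self.stack.append(self.stacks[name][-1])

env = Env()
env.stack.append("result")
env.take()            # pick up the intermediate result
env.goto("reports")   # navigate to another task's stack
env.put()             # drop it there: implicit data plumbing
```

The point of the hand is visible even in this toy: carrying an item between stacks takes one `take` and one `put`, with no rolling or position arithmetic.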

As I developed this rich environment, it occurred to me that I had
essentially integrated a user-model with the program itself. Actually, my
first thought was closer to: hey, I'm modeling a character in a game! Go go
Data Plumber! The programmer is manipulating an avatar, navigating from
task to task and stack to stack. The programmer has items in hand, plus a
potential inventory (e.g. an inventory stack). To push metaphors a bit: I
can model keyrings full of sealer/unsealer pairs, locked rooms with sealed
values, unique 'artifacts' and 'puzzles' in the form of affine and relevant
types [5], quest goals in the form of fractional types (representing
futures/promises) [6], and 'spellbooks' in the form of static capabilities
[7]. But in 

Re: [fonc] Programmer Models integrating Program and IDE

2013-08-28 Thread John Carlson
Multi-threaded Object-Oriented Stack Environment ... MOOSE for short.

Also check out VIPR from Wayne Citrin and friends at UC Boulder, as well
as AgentSheets, AgentCubes and XMLisp while you are at it.  Not far
from SimCity and friends.  Videos of Unreal Kismet may also be helpful
if you haven't already seen them.

I think we're moving towards automated game design though, which will
become the next platform.

Good luck!
On Aug 28, 2013 5:36 PM, David Barbour dmbarb...@gmail.com wrote:


Re: [fonc] Programmer Models integrating Program and IDE

2013-08-28 Thread John Carlson
I think you should replace stack with collection, but perhaps that's too
javaesque.
On Aug 28, 2013 5:36 PM, David Barbour dmbarb...@gmail.com wrote:


Re: [fonc] Programmer Models integrating Program and IDE

2013-08-28 Thread John Carlson
Have you considered controlling stacks, program counters and iterators from
the same basic metaphor?  We used recorder buttons: Forward, Reverse,
Stop, Fast Forward, and Fast Reverse, plus undo (delete previous
operation) and delete next operation.
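
One way to read the recorder metaphor (a speculative sketch, not the tool John describes): keep a log of operations and a program counter, where each operation carries its own inverse, so the same tape-transport controls drive both execution and undo:

```python
class Recorder:
    """Tape-transport control over a log of invertible operations."""
    def __init__(self, state, ops):
        self.state = state
        self.ops = ops   # list of (apply_fn, undo_fn) pairs
        self.pc = 0      # program counter into the log

    def forward(self):
        apply_fn, _ = self.ops[self.pc]
        self.state = apply_fn(self.state)
        self.pc += 1

    def reverse(self):
        self.pc -= 1
        _, undo_fn = self.ops[self.pc]
        self.state = undo_fn(self.state)

# each logged step knows how to undo itself
ops = [(lambda s: s + 5, lambda s: s - 5),
       (lambda s: s * 2, lambda s: s // 2)]
r = Recorder(10, ops)
r.forward(); r.forward()   # play: 10 -> 15 -> 30
r.reverse()                # rewind one step: back to 15
print(r.state)             # 15
```

Fast Forward/Fast Reverse would just iterate these, and breakpoints become positions in the log where iteration stops.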
On Aug 28, 2013 5:36 PM, David Barbour dmbarb...@gmail.com wrote:


Re: [fonc] Programmer Models integrating Program and IDE

2013-08-28 Thread John Carlson
Along with the recorder metaphor we added breakpoints, which worked when
travelling in either direction through the code.
On Aug 28, 2013 8:39 PM, John Carlson yottz...@gmail.com wrote:

 You'd probably want to add copy and paste as well.

Re: [fonc] Programmer Models integrating Program and IDE

2013-08-28 Thread John Carlson
We used static breakpoints and relative cursors.  Making the breakpoints
more dynamic would be an interesting research project.  We were able to
make cursors in text dependent on each other.
On Aug 28, 2013 8:58 PM, John Carlson yottz...@gmail.com wrote:


Re: [fonc] Programmer Models integrating Program and IDE

2013-08-28 Thread John Carlson
We also had the concepts of a text region definition, which was persistent,
and a text region, which was computed at runtime.  Same with text location
definition and text location.  That way, we could adapt to different
inputs.
On Aug 28, 2013 5:36 PM, David Barbour dmbarb...@gmail.com wrote:

 I understand 'user modeling' [1] to broadly address long-term details
 (e.g. user preferences and settings), mid-term details (goals, tasks,
 workflow), and short-term details (focus, attention, clipboards and
 cursors, conversational context, history). The unifying principle is that
 we have more context to make smart decisions, to make systems behave in
 ways their users expect. This is a form of context sensitivity, where the
 user is explicitly part of the context.

 Programming can be understood as a form of user interface. But,
 historically, user modeling (in this case 'programmer modeling') has been
 kept carefully separate from the program itself; instead, it is instead
 part of an Integrated Development Environment (IDE)

 *Hypothesis:* the separation of user-model from program has hindered both
 programmers and the art of programming. There are several reasons for this:

 1) Our IDEs are not sufficiently smart. The context IDEs keep is
 heuristic, fragile, and can be trusted with only the simplest of tasks.
 2) Poor integration with the IDE and visual environments: it is difficult
 to assign formal meaning to gestures and programmer actions.
 3) Programmer-layer goals, tasks, and workflows are generally opaque to
 the IDE, the programs and the type system.
 4) Our code must be explicit and verbose about many interactions that
 could be implicit if we tracked user context.
 5) Programmers cannot easily adjust their environment or language so that
 it knows what they mean and acts as they expect.

 I believe we can do much better. I'll next provide a little background
 about how this belief came to be, then what I'm envisioning.

 *Background*

 Recently, I started developing a tacit representation for an arrowized
 reactive programming model. Arrows provide a relatively rigid 'structure'
 to the program. In the tacit representation, this structure was represented
 as a stack consisting of a mix of compile-time values (text, numbers,
 blocks) and runtime signals (e.g. mouse position). Essentially, I could give
 the stack a 'static type', but I still used FORTH-like idioms to roll and
 pick items from the stack as though it were a dynamic structure. With
 just a little static introspection, I could even model `7 pick` as copying
 the seventh element of the stack to the top of the stack.
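
The idea of resolving `pick` and `roll` by inspecting the stack's structure, rather than by runtime indexing, can be sketched in ordinary Python (a hypothetical illustration, not the author's arrow-based implementation; the stack is just a list with its top at the end):

```python
# Sketch (not the author's code): a stack modeled as a list, with
# Forth-style 'pick' and 'roll' resolved by looking at the structure.
# Top of stack is the last element; n counts from the top, 1-based.

def pick(stack, n):
    """Copy the n-th element (from the top) onto the top of the stack."""
    return stack + [stack[-n]]

def roll(stack, n):
    """Move the n-th element (from the top) to the top of the stack."""
    item = stack[-n]
    return stack[:-n] + stack[len(stack) - n + 1:] + [item]

s = ["a", "b", "c", "d", "e", "f", "g"]  # "g" is the top
print(pick(s, 7))   # copies "a" (seventh from the top) to the top
print(roll(s, 3))   # moves "e" to the top
```

In the tacit arrow setting the same resolution would happen at compile time, against the stack's static type, so a bad `7 pick` on a six-element stack fails to typecheck instead of failing at runtime.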

 But I didn't like this single stack environment. It felt cramped.

 Often, I desire to decompose a problem into multiple concurrent tasks or
 workflows. And when I do so, I must occasionally integrate intermediate
 results, which can involve some complex scattering and gathering
 operations. On a single stack, this integration is terribly painful: it
 involves rolling or copying intermediate signals and values upwards or
 downwards, with relative positions that are difficult to remember.
 Conclusion: a single stack is good only for a single, sequential task - a
 single pipeline, in a dataflow model.

 But then I realized: I'm not limited to modeling a stack. A stack is just
 one possible way of organizing and conceptualizing the 'type' of the arrow.
 I can model any environment I please! (I'm being serious. With the same
 level of introspection needed for `7 pick`, I could model a MUD, MOO, or
 interactive fiction in the type system.) After experimenting with tree
 zippers [2] or a list of anonymous stacks [3], I'm kind of (hopefully)
 settling on an easy-to-use environment [4] that consists of:

 * current stack
 * hand
 * current stack name
 * list of named stacks

 The current stack serves the traditional role. The 'hand' enables
 developers to 'take' and 'put' objects (and juggle a few of them, like
 'roll' except for the hand) - it's really convenient even for operating on
 a single stack, and also helps carry items between stacks (implicit data
 plumbing). The list of named stacks is achieved using compile-time
 introspection (~type matching for different stack names) and is very
 flexible:

 * different stacks for different tasks; ability to navigate to a different
 stack (goto)
 * programmers can 'load' and 'store' from a stack remotely (treat it like
 a variable or register)
 * programmers can use named stacks to record preferences and configuration
 options
 * programmers can use named stacks to store dynamic libraries of code (as
 blocks)
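
The environment described above (current stack, hand, named stacks) can be sketched as plain data with a handful of operations. This is a minimal hypothetical model with names of my own choosing, not the author's actual language:

```python
# Sketch of the described environment: a current stack, a 'hand' for
# carrying items between stacks, and a dictionary of named stacks.
# Operation names (take/put/goto/load/store) follow the post's wording.
from dataclasses import dataclass, field

@dataclass
class Env:
    stacks: dict = field(default_factory=lambda: {"default": []})
    current: str = "default"               # current stack name
    hand: list = field(default_factory=list)

    @property
    def stack(self):
        return self.stacks[self.current]

    def take(self):                        # top of current stack -> hand
        self.hand.append(self.stack.pop())

    def put(self):                         # hand -> top of current stack
        self.stack.append(self.hand.pop())

    def goto(self, name):                  # navigate to a named stack
        self.stacks.setdefault(name, [])
        self.current = name

    def store(self, name):                 # remote store: stack top -> named stack
        self.stacks.setdefault(name, []).append(self.stack.pop())

    def load(self, name):                  # remote load: named stack top -> stack
        self.stack.append(self.stacks[name].pop())

env = Env()
env.stack.extend([1, 2, 3])
env.take()            # carry 3 in hand (implicit data plumbing)
env.goto("results")   # navigate to a different task's stack
env.put()             # drop it there
```

In the real system the whole `Env` would live in the arrow's static type, so every `take`, `goto`, and `store` is checked at compile time; the dynamic version here only conveys the shape of the environment.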

 As I developed this rich environment, it occurred to me that I had
 essentially integrated a user-model with the program itself. Actually, my
 first thought was closer to "hey, I'm modeling a character in a game! Go go
 Data Plumber!" The programmer is manipulating an avatar, navigating from
 task to task and stack to stack. The programmer has items in hand, plus a
 potential inventory (e.g. 

Re: [fonc] Programmer Models integrating Program and IDE

2013-08-28 Thread John Carlson
These may be your details.
On Aug 28, 2013 9:25 PM, John Carlson yottz...@gmail.com wrote:

 We also had the concepts of text region definition, which was persistent,
 and text region, which was computed at runtime.  Same with text location
 definition and text location.  That way, we could adapt to different
 inputs.

Re: [fonc] Programmer Models integrating Program and IDE

2013-08-28 Thread John Carlson
How does one make a recipe maker in Minecraft?  I know you can make a
recipe maker with a recipe,  but who decides what a recipe makes?  Can you
make more than one type of thing at the same time?  Can a human make more
than one type of thing at the same time?  Or a robot?