[fwd to fonc]

Using tree zippers to model multimedia documents in the type system is an
interesting possibility. It seems obvious in hindsight, but I had been
focusing on other problem spaces.

Hmm. I wonder if it might be intuitive to place the doc as an object on
the stack, then use the stack itself for the up/down (inclusion/extrusion)
zipper ops, allowing ops directly on sub-docs rather than always keeping
the full tree as the top stack item. OTOH, either approach would be
limited to one cursor.
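
Concretely, a single-cursor version might look something like the rough
Haskell sketch below (not Awelon; the type and function names are invented
purely for illustration). The doc is a rose tree, and the zipper context is
literally a stack of frames, so 'down' pushes and 'up' pops:

    data Doc = Node String [Doc]            -- element label plus children

    data Frame = Frame String [Doc] [Doc]   -- parent label, left siblings
                                            -- (reversed), right siblings

    type Zipper = (Doc, [Frame])            -- focus plus a stack of frames

    enter :: Doc -> Zipper
    enter d = (d, [])

    down :: Zipper -> Maybe Zipper          -- push a frame, focus first child
    down (Node lbl (c:cs), stack) = Just (c, Frame lbl [] cs : stack)
    down _                        = Nothing

    up :: Zipper -> Maybe Zipper            -- pop a frame, rebuild the parent
    up (focus, Frame lbl ls rs : stack) =
      Just (Node lbl (reverse ls ++ focus : rs), stack)
    up _ = Nothing

Navigating into a sub-doc pushes a frame; zipping back out pops one and
rebuilds the parent, so the rest of the stack underneath could still hold
ordinary items.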

What are you envisioning when you say "multiple cursors"? I can't think how
to do that without picking the doc apart and essentially modeling
hyperlinks (i.e. putting different divs on different named stacks so I can
have a different cursor in each div, then using a logical href to docs on
other stacks). This might or might not fit what you're imagining.

(I can easily model full multi-stack environments as first-class types.
This might also be a favorable approach to representing docs.)
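
For instance (another loose Haskell sketch, reusing the Doc/Zipper types
from the sketch above; Name, Env, and withCursor are names I'm inventing
here), each named stack could carry its own zipper, giving an independent
cursor per div, and the whole multi-stack environment becomes an ordinary
first-class value:

    import qualified Data.Map as M

    type Name = String
    type Env  = M.Map Name Zipper   -- one independent cursor per named stack

    -- apply a zipper op (e.g. up or down) to the cursor on a named stack
    withCursor :: Name -> (Zipper -> Maybe Zipper) -> Env -> Maybe Env
    withCursor n op env = do
      z  <- M.lookup n env
      z' <- op z
      pure (M.insert n z' env)

A logical href would then just be a Name, and following it means asking
withCursor to continue from wherever that div's cursor last stopped.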

Model transform by example sounds like something this design could be very
good for. Actually, I was imagining some of Bret Victor's drawing examples
(where it builds a procedure) would also be a good fit.

My language has a name: Awelon.  But thanks for offering the name of your
old project. :)


On Aug 29, 2013 2:11 PM, "John Carlson" <[email protected]> wrote:

> I was suggesting MOOSE as a working name for your project.
>
> I used to keep a list of features for MOOSE that I wanted to develop.
> MOOSE (future) was the next step beyond TWB/TE (now) that never got
> funded.  TWB was single threaded for the most part.  I have done some work
> on creating multiple recorder desktop objects. MOOSE would have had a way
> to create new desktop objects as types, instead of creating them in C++.
> There would have been a way to create aggregate desktop objects, either as
> lists or maps.  I would have provided better navigation for Forms, which
> are essentially used for XML and EDI/X12.  One thing I recall wanting to
> add was some kind of parser for desktop objects in addition to text file
> parsers and C++ persistent object parsers.
>
> RPN was only for the calculator.  The other stack that we had was the undo
> stack for reversible debugging.
>
> I believe an extension to VIPR was to add object visualization to the
> pipeline.  The reason I pointed you at VIPR is that the programmer model is
> similar to ours.
>
> I found that document was the best implementation I had of a tree
> zipper.  You could focus the activity anywhere in the document.  I tried to
> do form as a tree zipper, but limited movement made it difficult to use.  I
> ruined a demo by focusing on the form too much.  At one point, I could kind
> of drag the icon on the form to the document and produce a text document
> from the form (and vice versa).  I think I also worked on dragging the
> recorder icon to the document.  This would have converted the iconic
> representation to the C++ representation.
>
> All the MOOSE extensions after TWB/TE left production were rather
> experimental in nature.
>
> I suggest you might use a multimedia document as the visualization of your
> tree zipper.  Then have multiple cursors which might rely on each other to
> manipulate the tree.
>
> Check out end-user programming and model transformation by demonstration
> for more recent ideas.
>  On Aug 29, 2013 2:37 AM, "David Barbour" <[email protected]> wrote:
>
>>
>> On Wed, Aug 28, 2013 at 5:57 PM, John Carlson <[email protected]> wrote:
>>
>>> Multi-threaded Object-Oriented Stack Environment ... MOOSE for short.
>>
>>
>> Would you mind pointing me to some documentation? I found your document
>> on "A Visual Language for Data Mapping" but it doesn't discuss MOOSE. From
>> the intro thread, my best guess is that you added objects and arrays to
>> your RPN language? But I'm not sure how the multi-threading is involved.
>>
>>
>>> Also check out VIPR from Wayne Citrin and friends at UC Boulder. Also
>>> check out AgentSheets, AgentCubes and XMLisp while you are at it.  Not far
>>> from SimCity and friends. Also looking at videos from Unreal Kismet may be
>>> helpful if you haven't already seen them.
>>
>>
>> I've now checked these out.  I am curious what led you to recommend them.
>>
>> To clarify, my interest in visual programming is about finding a way to
>> unify HCI with programming and vice versa. To make the 'programmer-model' a
>> formal part of the 'program' is, I now believe, the most promising step in
>> that direction after live programming. As I described (but did not clarify)
>> this enables the IDE to be very thin, primarily a way of rendering a
>> program and extending it. The bulk of the logic of the IDE, potentially
>> even the menu systems, is shifted into the program itself.
>>
>> (While I am interested in game development, my mention of it was intended
>> more as a declaration of expressiveness than a purpose.)
>>
>> Croquet - with its pervasively hackable user environment - is much closer
>> to what I'm looking for than AgentCubes. But even Croquet still has a
>> strong separation between 'interacting with objects' and 'programming'.
>>
>> Other impressions:
>>
>> VIPR - Visual Imperative PRogramming - seems to be exploring visual
>> representations. I was confused that they did not address acquisition or
>> assignment of data - those would be the most important edges in data-flow
>> systems. But I guess VIPR is more a control-flow model than a data-flow.
>> One good point, made repeatedly in the VIPR papers, is that we need to avoid
>> "edges" because they create complexity that is difficult to comprehend,
>> especially as we zoom away from the graph.
>>
>> I do like that Kismet is making reactive computation accessible and
>> useful to a couple million people.
>>
>>
>>
>>>
>>> I think you should replace stack with collection
>>>
>>
>> I could model a number of different collections, within the limit that
>> they be constructed of products (pairs) to fit the arrowized semantics
>> <http://en.wikipedia.org/wiki/Arrow_(computer_science)>. So far I've
>> modeled:
>>
>> * one stack (operate only near top - take, put, roll; no navigation)
>> * list zipper (navigational interface in one dimension: stepLeft, stepRight)
>> * tree zipper (two-dimensional navigation in a tree: up, down, left, right)
>> * list zipper of stacks (stepLeft, stepRight, take, put, roll)
>> * named stacks via metaprogramming (ad-hoc navigation: "foo" goto)
>>
>> The tree-zipper is the most expressive I can achieve without
>> metaprogramming.
>>
>> The more expressive collections, however, are not necessarily "good".
>> After building the tree zipper, I couldn't figure out how I wanted to use
>> it. Same for the list zipper, though the 'hand' concept serves a similar
>> role (take and put instead of stepLeft and stepRight). For a list of
>> anonymous stacks: I tend to stick around on one stack for a while, and
>> forget the relative positions of other stacks. That's why I eventually went
>> for named stacks.
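>>
>> Concretely, the list zipper of stacks plus the hand looks roughly like
>> this (a loose Haskell sketch, not the actual Awelon encoding; the names
>> are illustrative, and take/put are spelled takeH/putH only to dodge the
>> Prelude clash):
>>
>>     type Stack a = [a]
>>
>>     -- left neighbors (nearest first), current stack, right neighbors, hand
>>     data Spaces a = Spaces [Stack a] (Stack a) [Stack a] (Stack a)
>>
>>     stepLeft, stepRight :: Spaces a -> Maybe (Spaces a)
>>     stepLeft  (Spaces (l:ls) cur rs h) = Just (Spaces ls l (cur:rs) h)
>>     stepLeft  _                        = Nothing
>>     stepRight (Spaces ls cur (r:rs) h) = Just (Spaces (cur:ls) r rs h)
>>     stepRight _                        = Nothing
>>
>>     -- takeH: top of the current stack into the hand; putH: back again
>>     takeH, putH :: Spaces a -> Maybe (Spaces a)
>>     takeH (Spaces ls (x:cur) rs h) = Just (Spaces ls cur rs (x:h))
>>     takeH _                        = Nothing
>>     putH  (Spaces ls cur rs (x:h)) = Just (Spaces ls (x:cur) rs h)
>>     putH  _                        = Nothing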
>>
>>
>>>
>>> Have you considered controlling stacks, program counters and iterators
>>> from the same basic metaphor? We used recorder buttons. Forward, Reverse,
>>> Stop, Fast Forward, and Fast Reverse.  Then undo (delete previous
>>> operation) and delete next operation. [..] You'd probably want to add copy
>>> and paste as well. [..] Along with the recorder metaphor we added
>>> breakpoints which worked travelling in either direction in the code.
>>
>>
>> My language doesn't have runtime stacks, program counters, or iterators.
>> But I've mentioned viewing and animating parts of the compile-time history.
>>
>>
>>> I know you can make a recipe maker with a recipe, but who decides what a
>>> recipe makes?
>>
>>
>> Another recipe maker; you need to bootstrap.
>>
>>
>>
>>> Can you make more than one type of thing at the same time?  Can a human
>>> make more than one type of thing at the same time?  Or a robot?
>>
>>
>> Living humans are always making more than one type of thing at a time. I
>> mean, unless you discount 'perspiration' and 'CO2' and 'heat' and 'sound'
>> and a bunch of other products I don't care to mention. I imagine the same
>> could be said for robots.  ;)
>>
>> Humans can do a lot once it's shifted into their subconscious thoughts.
>> But their eye focus is about the size of a dime at arm's length, and they
>> aren't very good at consciously focusing on more than one problem at a
>> time. Robots, however, are only limited by their mobility, sensors,
>> actuators, processors, programming, and resources. Okay... that's a lot of
>> limits. But if you had the funds and the time and the skills, you could
>> build a robot that can make more than one thing at a time.
>>
>>