Re: [fonc] HotDraw's Tool State Machine Editor

2011-08-01 Thread K. K. Subramaniam
On Monday 01 Aug 2011 1:36:32 AM BGB wrote:
 if so, I guess the difference now would be that modern people tend to 
 have a different perspective WRT numbers, thinking more of linear spaces 
 with digit rollover (more like an odometer or similar), hence to the 
 modern mind the roman-numeral system seems far less sane.
It is not that Roman numerals were insane ;-) but they served a specific 
purpose very well (e.g. tallying). Abaci or suanpans were machines that helped 
people tote up numbers in a flash. But coming up with higher notions like 
exponents (10^100) would have been very difficult in these systems. Notions and 
notations have to support each other.

this being because at base-2, the rules are a bit more elegant, more
like logic ops, whereas at base 10 they are a little more arbitrary, and
base-16 builds directly on the base 2 rules.
Exactly. Notice how the choice of hexadecimal or binary notation makes it 
easier to think about and deal with, say, switch settings or instruction decodes. 
Often patterns jump out at you.
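
(A tiny illustration of the point -- Python, with a made-up 8-bit instruction
format; the field layout here is invented for the example:)

# Decode a hypothetical 8-bit instruction: 2-bit opcode, 3-bit dest, 3-bit src.
# The binary literals make the field boundaries visible at a glance.
def decode(insn):
    opcode = (insn & 0b11000000) >> 6   # top two bits
    dst    = (insn & 0b00111000) >> 3   # middle three bits
    src    =  insn & 0b00000111         # low three bits
    return opcode, dst, src

print(decode(0xD3))   # 0xD3 == 0b11010011 -> (3, 2, 3)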

Subbu



Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-30 Thread K. K. Subramaniam
On Thursday 28 Jul 2011 10:27:26 PM Alan Kay wrote:
 Well, we don't absolutely need music notation, but it really helps many 
 things. We don't need the various notations of mathematics (check out
 Newton's  use of English for complex mathematical relationships in the
 Principia), but it really helps things.
I would consider notions and notations as distinct entities, but they often 
feed each other in a symbiotic relationship. Take the decimal system, for 
instance. The invention and refinement of decimal numerals made many higher 
notions possible; these would have been hindered by Roman numerals. Notations 
need to be carefully designed to assist in communicating notions to others or 
to connect notions together.

BTW, the bias ;-) towards written forms in computing should not blind us to 
the fact that speech is a form of notation too. The speech-notion connection 
was studied for thousands of years before written notations (cf. the Sphota, 
Logos or Vakyapadiya entries in Wikipedia).

Subbu



Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-30 Thread Alan Kay
By the way, a wonderful example of the QWERTY phenomenon is that both the 
Greeks and the Romans actually did calculations with an on-table or 
on-the-ground abacus that did have a zero (the term for the small stone 
employed was a calculus), but used a much older set of conventions for 
writing numbers down.

(One can imagine the different temperaments involved in the odd arrangement 
above -- which is very much like many such odd arrangements around us in the 
world today ...)

Cheers,

Alan





From: K. K. Subramaniam kksubbu...@gmail.com
To: fonc@vpri.org
Cc: Alan Kay alan.n...@yahoo.com
Sent: Sat, July 30, 2011 3:09:39 PM
Subject: Re: [fonc] HotDraw's Tool State Machine Editor

On Thursday 28 Jul 2011 10:27:26 PM Alan Kay wrote:
 Well, we don't absolutely need music notation, but it really helps many 
 things. We don't need the various notations of mathematics (check out
 Newton's  use of English for complex mathematical relationships in the
 Principia), but it really helps things.
I would consider notions and notations as distinct entities, but they often 
feed each other in a symbiotic relationship. Take the decimal system, for 
instance. The invention and refinement of decimal numerals made many higher 
notions possible; these would have been hindered by Roman numerals. Notations 
need to be carefully designed to assist in communicating notions to others or 
to connect notions together.

BTW, the bias ;-) towards written forms in computing should not blind us to 
the fact that speech is a form of notation too. The speech-notion connection 
was studied for thousands of years before written notations (cf. the Sphota, 
Logos or Vakyapadiya entries in Wikipedia).

Subbu


Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-29 Thread Quentin Mathé
On 28 July 2011, at 18:57, Alan Kay wrote:

 Well, we don't absolutely *need* music notation, but it really helps many 
 things. We don't *need* the various notations of mathematics (check out 
 Newton's use of English for complex mathematical relationships in the 
 Principia), but it really helps things.
 
 I do think the hard problem is design and all that goes along with it (and 
 this is true in music and math too). But that is not the end of it, nor is 
 ignoring the design of visual representations that help grokking and thinking 
 a good idea.
 
 I think you are confusing/convolving the fact of being able to do something 
 with the ease of it. This confusion is rampant in computer science 

I had never thought about notation variations in other domains such as music 
(e.g. percussion notation), and how specialized notations are used in a 
DSL-like manner. This sounds like a convincing argument indeed, and gives me a 
new perspective from which to think about DSLs.
Mathematical notation then looks like a large and extensible collection of 
small DSLs (e.g. matrices, integration, etc.) where the line between the core 
notation and specialized notations is blurred.

Also, this makes me realize that the DSL vs framework tension I was discussing 
is very similar to the tension that exists between top-down and bottom-up design.
You can build the framework first (the core abstractions) and very late in the 
development add a DSL (the best way to represent and manipulate these 
abstractions), or do it the other way around: start by thinking about what the 
optimal DSL would be, and write the framework to support it. 

Cheers,
Quentin.

 From: Quentin Mathé qma...@gmail.com
 To: Fundamentals of New Computing fonc@vpri.org
 Sent: Thu, July 28, 2011 12:32:53 PM
 Subject: Re: [fonc] HotDraw's Tool State Machine Editor
 
 Hi Alan,
 
 On 25 July 2011, at 10:08, Alan Kay wrote:
 
  I don't know of an another attempt to build a whole system with wide 
  properties in DSLs. But it wouldn't surprise me if there were some others 
  around. It requires more design effort, and the tools to make languages 
  need to be effective and as easy as possible, but the payoffs are worth it. 
  I was asked this question after the HPI talk: what about the Tower of 
  Babel from using DSLs -- isn't there a learning curve problem?
  
  My answer was: yes there is, but if you can get factors of 100s to 1000s of 
  decrease in size and increase in clarity, the tradeoff will be more like 
  you have to learn 7 languages, but then there are only a few hundred pages 
  of code in the whole system -- vs -- you only have to learn one language 
  but the system is 4 million pages of code, so you will never come close to 
  understanding it.
  
  (Hint: try to avoid poor language designs -- like perl etc. -- for your 
  DSLs ...)
  
  This is kind of a "mathematics is a plural" situation that we already have. 
  Maths are made up as DSLs to efficiently represent and allow thinking about 
  many different kinds of domains. One of the things one learns while 
  learning math is how to learn new representations.
  
  This used to be the case 50 years ago when most programming was done in 
  machine code. When I was a journeyman programmer at that time, I had to 
  learn 10 or 12 different instruction sets and macro-assembler systems for 
  the many different computers I had to program in the Air Force and then at 
  NCAR. We also had to learn a variety of mid-level languages such as 
  Fortran, COBOL, RPG, etc. This was thought of as no big deal back then, it 
  was just part of the process.
  
  So when people started talking in the 60s about POLs in research (Problem 
  Oriented Languages -- what are called DSLs today) this seemed like a very 
  good idea to most people (provided that you could get them to be efficient 
  enough). This led partly to Ted Steele's idea of an UNCOL (Universal 
  Computer Oriented Language) which was a relatively low-level target for 
  higher level languages whose back-end could be optimized just once for each 
  cpu. Historically, C wound up filling this role about 10 years later for 
  people who wanted a universal target with an optimizer attached.
  
  Overall, I would say that the biggest difficulties -- in general -- are 
  still the result of not knowing how to design each and every level of 
  software well enough.
 
 
 As you mention it, it looks to me like the really hard problem is design and 
 how to push OOP to its limits. From this perspective, I'm not convinced 
 that DSLs are really critical.
 
 DSLs could matter more in the lower levels. For example, a DSL such as the 
 s-expression language described in 'PEG-based transformer provides front-, 
 middle and back-end stages in a simple compiler' seems very convincing, at 
 least the overall result is very impressive. I was able to understand an 
 entire non-trivial compiler for the first time in my life :-)
 But the closer we get to the user

Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-29 Thread Wesley Smith
 I like to think about simplicity as coming up with the right core 
 abstractions and the optimal way to distribute complexity among them to 
 support a large set of use cases.


This phrase comes up so much when talking about computational systems
that I wonder if it can be made more tangible.  It would be really
interesting to see different sets of abstractions and some
representation of the computational space that they cover.

So far, the only material I've seen that might possibly be applied to
such an approach are things like Synthetic Topology, which from what I
understand is a generalization of topology from a category theory
perspective.  Has anyone here worked with the concepts of synthetic
topology?  Anyone actually understand it?

http://www.cs.bham.ac.uk/~mhe/papers/barbados.pdf

wes



Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-29 Thread John Zabroski
On Fri, Jul 29, 2011 at 9:03 AM, Wesley Smith wesley.h...@gmail.com wrote:

  I like to think about simplicity as coming up with the right core
 abstractions and the optimal way to distribute complexity among them to
 support a large set of use cases.


 This phrase comes up so much when talking about computational systems
 that I wonder if it can be made more tangible.  It would be really
 interesting to see different sets of abstractions and some
 representation of the computational space that they cover.

 So far, the only material I've seen that might possibly be applied to
 such an approach are things like Synthetic Topology, which from what I
 understand is a generalization of topology from a category theory
 perspective.  Has anyone here worked with the concepts of synthetic
 topology?  Anyone actually understand it?

 http://www.cs.bham.ac.uk/~mhe/papers/barbados.pdf


We had a discussion on the FONC mailing list, around March 2010?, that
touched upon different ways of viewing complexity in a system.  One person
gave an example using two pictures from a famous systems-theory book,
another argued for Gell-Mann's effective complexity metric, and so on.  (I
think you may have replied to this discussion.)  There are many examples in
computer science where two different logics, algebras or calculi are
required to have a complete definition of a system's properties.  Axel
Jantsch has some interesting examples in a book of his I own.


Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-28 Thread Quentin Mathé
Hi Alan,

On 25 July 2011, at 10:08, Alan Kay wrote:

 I don't know of an another attempt to build a whole system with wide 
 properties in DSLs. But it wouldn't surprise me if there were some others 
 around. It requires more design effort, and the tools to make languages need 
 to be effective and as easy as possible, but the payoffs are worth it. I was 
 asked this question after the HPI talk: what about the Tower of Babel from 
 using DSLs -- isn't there a learning curve problem?
 
 My answer was: yes there is, but if you can get factors of 100s to 1000s of 
 decrease in size and increase in clarity, the tradeoff will be more like you 
 have to learn 7 languages, but then there are only a few hundred pages of 
 code in the whole system -- vs -- you only have to learn one language but the 
 system is 4 million pages of code, so you will never come close to 
 understanding it.
 
 (Hint: try to avoid poor language designs -- like perl etc. -- for your DSLs 
 ...)
 
 This is kind of a "mathematics is a plural" situation that we already have. 
 Maths are made up as DSLs to efficiently represent and allow thinking about 
 many different kinds of domains. One of the things one learns while learning 
 math is how to learn new representations.
 
 This used to be the case 50 years ago when most programming was done in 
 machine code. When I was a journeyman programmer at that time, I had to learn 
 10 or 12 different instruction sets and macro-assembler systems for the many 
 different computers I had to program in the Air Force and then at NCAR. We 
 also had to learn a variety of mid-level languages such as Fortran, COBOL, 
 RPG, etc. This was thought of as no big deal back then, it was just part of 
 the process.
 
 So when people started talking in the 60s about POLs in research (Problem 
 Oriented Languages -- what are called DSLs today) this seemed like a very 
 good idea to most people (provided that you could get them to be efficient 
 enough). This led partly to Ted Steele's idea of an UNCOL (Universal 
 Computer Oriented Language) which was a relatively low-level target for 
 higher level languages whose back-end could be optimized just once for each 
 cpu. Historically, C wound up filling this role about 10 years later for 
 people who wanted a universal target with an optimizer attached.
 
 Overall, I would say that the biggest difficulties -- in general -- are still 
 the result of not knowing how to design each and every level of software well 
 enough.


As you mention it, it looks to me like the really hard problem is design and 
how to push OOP to its limits. From this perspective, I'm not convinced that 
DSLs are really critical.

DSLs could matter more in the lower levels. For example, a DSL such as the 
s-expression language described in 'PEG-based transformer provides front-, 
middle and back-end stages in a simple compiler' seems very convincing; at 
least the overall result is very impressive. I was able to understand an entire 
non-trivial compiler for the first time in my life :-)
But the closer we get to the user, the less critical they seem to be, imo. 

So I get the impression that STEPS could be written in Smalltalk or some 
improved dialect with a marginal impact on the code base size.
Compared to a normal operating system that weighs several million loc, with 
an entirely rethought design but no DSLs, it might be possible to reduce the 
whole system to 100,000 or 50,000 loc. 
Then using DSLs would make it possible to compress the code a bit more and go 
down to 20,000 loc, but the real gain would come from the new design approach 
rather than the DSLs. 

imo there is a tension between DSLs and frameworks/libraries. The more a 
framework design is refined, the more the framework stands as its own 
distinct language. When the point is reached where using a framework feels 
close to writing in a dedicated language, it's then relatively easy to add a 
DSL as syntactic sugar, but the expressivity or code-compression gains seem 
limited in most cases. If you implement the DSL earlier during the framework 
development, the gains can be more important, because the DSL will cover the 
framework's design limitations, but these will probably manifest elsewhere 
at a later time.

To take a concrete example, what looks important in OMeta is the concept but 
not OMeta as a DSL. For instance, Newspeak executable grammars or PetitParser 
appear to do almost the same as OMeta but without a dedicated DSL.
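
(To make that "executable grammar as plain framework" style concrete, here is
a minimal sketch in Python -- all names invented, not taken from PetitParser
or Newspeak: sequencing and ordered choice are ordinary functions, so the
grammar is just framework code with no dedicated syntax:)

# Minimal PEG-style parser combinators: a grammar built from ordinary
# objects/functions, with no dedicated DSL syntax.
def lit(s):
    def parse(text, i):
        return (i + len(s), s) if text.startswith(s, i) else None
    return parse

def seq(*ps):
    def parse(text, i):
        out = []
        for p in ps:
            r = p(text, i)
            if r is None:
                return None          # PEG sequence: fail as a whole
            i, v = r
            out.append(v)
        return i, out
    return parse

def alt(*ps):
    def parse(text, i):
        for p in ps:                 # PEG ordered choice: first match wins
            r = p(text, i)
            if r is not None:
                return r
        return None
    return parse

def opt(p):
    def parse(text, i):
        return p(text, i) or (i, None)
    return parse

digit  = alt(*[lit(c) for c in "0123456789"])
number = seq(digit, opt(digit))      # one or two digits, crudely

print(seq(number, lit("+"), number)("12+3", 0))
# -> (4, [['1', '2'], '+', ['3', None]])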

So I'd be curious to know what yours take on this DSL vs framework issue.
I also wonder if you have studied how big would become Nile or some other STEPs 
subprojects using DSLs if they were rewritten in Smalltalk…

Cheers,
Quentin.


Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-28 Thread Alan Kay
Well, we don't absolutely *need* music notation, but it really helps many 
things. We don't *need* the various notations of mathematics (check out 
Newton's 
use of English for complex mathematical relationships in the Principia), but it 
really helps things.


I do think the hard problem is design and all that goes along with it (and 
this is true in music and math too). But that is not the end of it, nor is 
ignoring the design of visual representations that help grokking and thinking a 
good idea.

I think you are confusing/convolving the fact of being able to do something 
with 
the ease of it. This confusion is rampant in computer science 

Cheers,

Alan





From: Quentin Mathé qma...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org
Sent: Thu, July 28, 2011 12:32:53 PM
Subject: Re: [fonc] HotDraw's Tool State Machine Editor

Hi Alan,

On 25 July 2011, at 10:08, Alan Kay wrote:

 I don't know of an another attempt to build a whole system with wide 
 properties 
in DSLs. But it wouldn't surprise me if there were some others around. It 
requires more design effort, and the tools to make languages need to be 
effective and as easy as possible, but the payoffs are worth it. I was asked 
this question after the HPI talk: what about the Tower of Babel from using 
DSLs -- isn't there a learning curve problem?
 
 My answer was: yes there is, but if you can get factors of 100s to 1000s of 
decrease in size and increase in clarity, the tradeoff will be more like you 
have to learn 7 languages, but then there are only a few hundred pages of code 
in the whole system -- vs -- you only have to learn one language but the 
system 
is 4 million pages of code, so you will never come close to understanding it.
 
 (Hint: try to avoid poor language designs -- like perl etc. -- for your DSLs 
...)
 
 This is kind of a "mathematics is a plural" situation that we already have. 
Maths are made up as DSLs to efficiently represent and allow thinking about 
many 
different kinds of domains. One of the things one learns while learning math 
is 
how to learn new representations.
 
 This used to be the case 50 years ago when most programming was done in 
 machine 
code. When I was a journeyman programmer at that time, I had to learn 10 or 12 
different instruction sets and macro-assembler systems for the many different 
computers I had to program in the Air Force and then at NCAR. We also had to 
learn a variety of mid-level languages such as Fortran, COBOL, RPG, etc. This 
was thought of as no big deal back then, it was just part of the process.
 
 So when people started talking in the 60s about POLs in research (Problem 
Oriented Languages -- what are called DSLs today) this seemed like a very good 
idea to most people (provided that you could get them to be efficient enough). 
This led partly to Ted Steele's idea of an UNCOL (Universal Computer 
Oriented 
Language) which was a relatively low-level target for higher level languages 
whose back-end could be optimized just once for each cpu. Historically, C 
wound 
up filling this role about 10 years later for people who wanted a universal 
target with an optimizer attached.
 
 Overall, I would say that the biggest difficulties -- in general -- are still 
the result of not knowing how to design each and every level of software well 
enough.


As you mention it, it looks to me like the really hard problem is design and 
how to push OOP to its limits. From this perspective, I'm not convinced that 
DSLs are really critical.

DSLs could matter more in the lower levels. For example, a DSL such as the 
s-expression language described in 'PEG-based transformer provides front-, 
middle and back-end stages in a simple compiler' seems very convincing, at 
least 
the overall result is very impressive. I was able to understand an entire 
non-trivial compiler for the first time in my life :-)
But the closer we get to the user, the less critical they seem to be, imo. 

So I get the impression that STEPS could be written in Smalltalk or some 
improved dialect with a marginal impact on the code base size.
Compared to a normal operating system that weighs several million loc, with 
an entirely rethought design but no DSLs, it might be possible to reduce the 
whole system to 100,000 or 50,000 loc. 

Then using DSLs would make it possible to compress the code a bit more and go 
down to 20,000 loc, but the real gain would come from the new design approach 
rather than the DSLs. 


imo there is a tension between DSLs and frameworks/libraries. The more a 
framework design is refined, the more the framework stands as its own 
distinct language. When the point is reached where using a framework feels 
close to writing in a dedicated language, it's then relatively easy to add a 
DSL as syntactic sugar, but the expressivity or code-compression gains seem 
limited in most cases. If you implement the DSL earlier during the framework 
development, the gains

Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-28 Thread BGB

On 7/28/2011 9:57 AM, Alan Kay wrote:
Well, we don't absolutely *need* music notation, but it really helps 
many things. We don't *need* the various notations of mathematics 
(check out Newton's use of English for complex mathematical 
relationships in the Principia), but it really helps things.


I do think the hard problem is design and all that goes along with 
it (and this is true in music and math too). But that is not the end 
of it, nor is ignoring the design of visual representations that help 
grokking and thinking a good idea.


I think you are confusing/convolving the fact of being able to do 
something with the ease of it. This confusion is rampant in computer 
science 




yes, agreed...


(possible tangent time...).

even though many mainstream programs involve huge amounts of code, much 
of this code is written with relatively little thinking involved (one 
throws together some code in an ad-hoc manner, and for the next task 
that comes up, just throws some more code out there, ...).


do this enough and one has a lot of code.

cleaner design, factoring things out, ... can help reduce the volume of 
code, but at a cost of potentially requiring far more thinking and 
mental effort to produce.



DSLs can also help reduce code because they, by their basic nature, 
factor out a lot of things (the whole domain thing), but the creation 
of a good DSL similarly involves a lot of mental factoring work, as well 
as the effort of going about creating it.



so, then a lot ends up boiling down to a large set of cost/benefit 
tradeoffs.



so, say, as a hypothetical example:
programmer A can partly turn-off their brain, and spew out a solution to 
a problem which is, say, 10 kloc in about a week;
programmer B then goes about thinking about it, and produces a more 
finely crafted 250 lines after about a month.


now, which is better?...

then, assume sometime later, the original developers are no longer 
around, and maintenance is needed (say, because requirements have 
changed, or new features were demanded by their superiors).


one may find that, although bigger, programmer A's code is generally 
easier to understand and modify (it just sort of meanders along and does 
its thing).


meanwhile, maybe programmer B's code is not so easy to understand, and 
will tend to blow up in the face of anyone who dares try to alter it.


now, which is better?...


a partial analogy could be drawn to entropy in data compression, which 
would roughly correspond to the internal complexity of a system. Making 
code bigger or smaller may not necessarily change its total complexity, 
but maybe only its relative density.


striving for simplicity can also help, but even simplicity can have costs:
sometimes, simplicity in one place may lead to much higher complexity 
somewhere else.


for example, simplicity at the lower-levels (towards the leaves of a 
dependency graph) tends to push complexity up the tree (towards the 
root of the tree).



for example, a person creates a very simplistic compiler IL, which then 
pushes the work onto the compiler upper-end writer;
the compiler writer doesn't want to deal with it, so then it is pushed 
onto the programmer;
the programmer is less happy having to worry about all these added edge 
cases, and so they want more pay;

...

then potentially, many levels of an organization are made less happy, 
..., mostly because someone near the bottom didn't want to add a number 
of sugar operations, and took it on faith that the level directly 
above them would cover for it.


so, simplification is not necessarily a cure-all either, rather, it is 
more necessary to try to figure out best what complexities belong where, 
in a goal to find the lowest overall costs.



for example:
is Java ByteCode fairly simple? I would say yes.
what about the JVM as a whole? I would say probably not.

for example, had the JVM used a much more powerful, if likely more 
complex, bytecode, it is possible now that its overall architectural 
complexity would have been lower.


but, then one may find that there are many different possibilities with 
differing tradeoffs, and possibly there is a lack of any ideal 
front-runner.


not that simplicity is a bad thing either though, just it is better to 
try to find a simple way to handle issues, rather than try to sweep them 
under the carpet or try to push them somewhere else.


or, at least, this is my thinking at the moment...



Cheers,

Alan


*From:* Quentin Mathé qma...@gmail.com
*To:* Fundamentals of New Computing fonc@vpri.org
*Sent:* Thu, July 28, 2011 12:32:53 PM
*Subject:* Re: [fonc] HotDraw's Tool State Machine Editor

Hi Alan,

On 25 July 2011, at 10:08, Alan Kay wrote:

 I don't know of an another attempt to build a whole system with wide 
properties in DSLs. But it wouldn't surprise me if there were some 
others around. It requires more design effort, and the tools to make

Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-28 Thread Casey Ransberger
After I heard about the Wolfram Alpha integration, I decided to give it a shot. 
I didn't like the price tag. 

At the time I was interested in chemical simulations, and I didn't know of 
anything else that'd let me do them out of the box (without requiring me to 
already have a much greater command of chemistry than I do presently).

I was on an A-Life kick. Was wondering if I might be able to find some kind 
of simplified artificial chemistry that'd let me get an evolutionary process 
going at a (pseudo-) chemical level without leaving me starved for computing 
resources. 

I really dug the way Alpha would in some cases be able to supply me with 
m-expression representations of things. 

Kind of wish they'd just used Lisp for Mathematica. Gridding it up isn't 
cheap either, so I ended up doing some weird art with it before abandoning it 
in favor of Squeak, which I can parallelize at the process level without 
trouble; and of course the green threads work well enough for some things too. 

I did really like the touch of HyperCard that was going on with it though. All 
in all, I haven't used it enough to justify the expense, and it takes too much 
space on the disk :( 

On Jul 28, 2011, at 12:14 PM, David Leibs david.le...@oracle.com wrote:

 Ah, powerful notations and the Principia.  These are some of my favorite 
 things. :-)
 
 An excellent example of a difficult subject made easier by a wonderful tool 
 is Mathematica. I think the Wolfram folks have done a really great job over 
 the last 25 years with Mathematica.  You can directly use math notation but 
 it is converted into the much more powerful internal representation of 
 Mathematica terms.  Just think of Mathematica terms as s-expressions and you 
 are close.  Mathematica's math notation system is built on a general system 
 that supports notations so that you could build a notation system that is 
 what chemists would want.
 
 The Mathematica Notebook is a wonderful system for exploration.  Notebooks 
 let you build beautiful publishable documents that typically also contain 
 live code for simulations and modeling.  Of course a Mathematica notebook is 
 just a bunch of Mathematica terms.  I wish web browsers were as good as 
 Mathematica notebooks.   They are like Smalltalk workspaces on steroids.  I 
 wish there was an  open source Mathematica notebook like 
 read-canonicalize-evaluate-present shell. 
 
 At the bottom of Mathematica is the Mathematica language which is a very 
 Lispy functional language with a gigantic library of primitives. There is 
 no Type Religion in Mathematica's functional programming.  The documentation 
 is extensive and made from Mathematica notebooks so that all examples are 
 both beautiful and executable.  The learning curve is very very high because 
 there is so much there but you can start simply just by evaluating 3+4 and 
 grow.  It's very open ended.
 
 The Mathematica language is also great for meta programming.  Last year my 
 son was working for the University of Colorado physics department building a 
 model of the interaction of the solar wind with the interstellar medium.  His 
 code made big use of meta programming to dramatically boost the performance.  
 He would partially evaluate his code/ math in the context of interesting 
 boundary conditions  then use Simplify to reduce the code then run it through 
 the low level Mathematica compiler.  He was able to get a 100x performance 
 boost this way.  Mathematica was his first programming language and he has 
 used it regularly for about 14 years .
 
 
 To give you a taste let's implement the most beautiful Newton's forward 
 difference algorithm  from the Principia.
 
 see:
   http://mathworld.wolfram.com/FiniteDifference.html
 
 for the background.
 The code below (hopefully not too mangled by blind email systems) repeatedly 
 takes differences until a zero is found, resulting in a list of lists of all 
 differences.  The first elements are then harvested and the last zero dropped.
 Now just make some parallel lists that will get passed to the difference-term 
 algorithm.  I could have written a loop, but APL and Mathematica have taught 
 me that Transpose, Apply, and a level spec can write my loops for me without 
 error.
 Once you have all the terms in a list just join them with Plus.  Finally run 
 the whole thing through Simplify.
 
 The differenceTerm and fallingFactorial helper functions should look familiar 
 to those who have played with modern functional programming.  Note I pass in 
 a symbol and the Mathematica rule reducing system naturally does partial 
 evaluation.  I realize I have left a lot of Mathematica details unexplained 
 but I am just trying to give a taste.
 
 Using the data from the mathworld web page we can evaluate away
 
 praiseNewton[{1, 19, 143, 607, 1789, 4211, 8539}, n]
 
1+7 n+2 n^2+3 n^3+6 n^4
 
 Without the Simplify we would have gotten:
 
 1+18 n+53 (-1+n) n+39 (-2+n) (-1+n) n+6 (-3+n) (-2+n) (-1+n) n
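
(In case David's Mathematica did get mangled in transit, here is a rough
Python transliteration of the same forward-difference idea, using sympy for
the symbolic algebra -- the function and variable names are mine, not his:)

# Newton's forward differences: rebuild the polynomial that generates a
# sequence from the leading entries of its repeated-difference table.
import sympy as sp

def praise_newton(values, n):
    leading, row = [], list(values)
    while row and any(v != 0 for v in row):
        leading.append(row[0])                       # harvest first element
        row = [b - a for a, b in zip(row, row[1:])]  # next difference row
    # Sum of leading[k] * n(n-1)...(n-k+1) / k!  (falling-factorial terms).
    terms = [c * sp.ff(n, k) / sp.factorial(k) for k, c in enumerate(leading)]
    return sp.expand(sum(terms))     # expand plays the role of Simplify here

n = sp.symbols('n')
print(praise_newton([1, 19, 143, 607, 1789, 4211, 8539], n))
# -> 6*n**4 + 3*n**3 + 2*n**2 + 7*n + 1, matching the result quoted above.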
 

Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-26 Thread Chris Warburton
On Mon, 2011-07-25 at 20:17 -0400, John Zabroski wrote:
 On Mon, Jul 25, 2011 at 4:08 AM, Alan Kay alan.n...@yahoo.com wrote:
 
  So when people started talking in the 60s about POLs in research (Problem
  Oriented Languages -- what are called DSLs today) this seemed like a very
  good idea to most people (provided that you could get them to be efficient
  enough). This led partly to Ted Steele's idea of an UNCOL (Universal
  Computer Oriented Language) which was a relatively low-level target for
  higher level languages whose back-end could be optimized just once for each
  cpu. Historically, C wound up filling this role about 10 years later for
  people who wanted a universal target with an optimizer attached.
 
  Overall, I would say that the biggest difficulties -- in general -- are
  still the result of not knowing how to design each and every level of
  software well enough.
 
 
 
 Yet, it is well known that source-to-source compilation techniques are not
 really good for optimization, as documented in Kennedy and Allen's text on
 dependence-based optimizing compilers.  They summarize their huge
 mistakes in source-to-source experiments by noting that the semantic
 information thrown away from stage to stage, such as structural information,
 could have been kept and re-used to implement optimizations like explicit
 vectorization. [1]

This reminded me of a Lambda the Ultimate post from a while back, which
I believe is this one http://lambda-the-ultimate.org/node/3220

It discusses a 2-phase compilation technique: the first phase is a
non-destructive annotation of the high-level code, which finds as many
invariants, equalities, etc. as it can in the code and makes a note of
them. The second phase tries to make the most optimal transformations it
can, using various heuristics, based on the fully annotated tree. By
doing two passes, this prevents a lot of information from being thrown
away by greedy optimisations.
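
(A toy sketch of that two-phase shape, in Python -- the AST encoding and the
single "constant-ness" fact are invented for illustration; the point is that
pass one only records facts, and pass two rewrites using all of them:)

# Phase 1 annotates the tree non-destructively; phase 2 transforms it.
# Expressions are tuples: ('num', 3), ('var', 'x'), ('mul', a, b).
def annotate(e, facts):
    if e[0] == 'num':
        facts[id(e)] = e[1]                      # record: node is constant
    elif e[0] == 'mul':
        annotate(e[1], facts); annotate(e[2], facts)
        l, r = facts.get(id(e[1])), facts.get(id(e[2]))
        if l is not None and r is not None:
            facts[id(e)] = l * r                 # whole product is constant
    return facts

def transform(e, facts):
    if id(e) in facts:
        return ('num', facts[id(e)])             # fold using recorded facts
    if e[0] == 'mul':
        return ('mul', transform(e[1], facts), transform(e[2], facts))
    return e

tree = ('mul', ('mul', ('num', 2), ('num', 3)), ('var', 'x'))
print(transform(tree, annotate(tree, {})))
# -> ('mul', ('num', 6), ('var', 'x'))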

My own thoughts WRT this are to maintain such annotations in a
self-hosting system as part of the reflection/introspection information,
and to do it both for lower-level translation (compilation) and for
higher-level/sideways translation (source-to-source translation), where
in the latter case the heuristics aren't optimisations but style hints.
For example, myJavascriptFunction.asPython() would give
Python code equivalent to myJavascriptFunction, and the heuristics would
make it as pythonic as possible.

This of course doesn't solve the issue of lost information, but it does
tackle it in a limited way. It wouldn't solve migrating a codebase from
one language to another (although it may help), but it makes a decent
attempt at write-only translations (e.g. compilation).

Thanks,
Chris Warburton




Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-26 Thread Wesley Smith
 This could change in the future to be more general purpose.  For example,
 hardware-based computations using quaternions and octonions.  As far as I am
 aware, it isn't done today for purely mathematical reasons; no one knows
 how.  And as far as I'm aware, such a mathematical breakthrough would be
 huge, but not something graphics vendors would pursue/fund, since it is
 basic research that can't be patented and so all graphics processors would
 get the same speedup. [1]


Incidentally, this research has been going on for at least 10 years
already and has made significant progress in terms of compiler tools
and software systems that can be used for real-time systems.  In
Guadalajara, there's a robotics group led by Eduardo
Bayro-Corrochano[1] that makes amazing machines that perform their
computations in an intrinsically spatial manner using geometric
(Clifford) algebra.  One of the issues with this algebra is that the
dimensionality of the computational space grows combinatorially.  The
standard 3D conformal model (5D Minkowski space) is a 32-dimensional
multivector.  Fortunately, there's some really good software that
optimizes away the many redundancies and zero-ops called Gaigen[2],
which can handle up to 12D Clifford Algebras.  Geometric algebra
subsumes quaternions and adds a lot more interesting structures and
operations.

I don't think it requires basic research since it's just linear
algebra and easily maps to GPU hardware.  Plus the research has
already been done.  The software already exists for use if you want
it.  I'm really not sure what interest the manufacturers would have in
it though, since their applications are more specific than the general
case of GA and lend themselves to more optimization.  Also, there's an
entire world of mathematics that would have to be taught to everyone,
since you aren't going to find courses in CS departments on this stuff
except in a handful of labs around the world (Netherlands, Saudi
Arabia, Mexico, Cambridge (in the physics department) ...)


Here are some papers about GA and hardware:

using FPGAs: 
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.159.1691&rep=rep1&type=pdf
conformal collision detection on GPUs:
http://eduardoroa.3dsquash.com/portafolioeduv2/document/publications/SIACG_article.pdf

and there are others implementing generic GA ops in graphics hardware
that I wasn't able to find as quickly.


[1] http://www.gdl.cinvestav.mx/edb/
[2] http://staff.science.uva.nl/~fontijne/g25.html



Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-26 Thread John Zabroski
I don't have time to delve into all that. ;-)

The value in quaternions is that they are a compact, direct representation
of a transformation matrix in 3D space, ergo they seem ideally suited for 3D
graphics abstractions.  Technically, I suppose a software layer could do the
optimization and map it to SIMD coprocessors, but figuring out hardware that
could apply the same principles might result in even more speedup.  I don't
know of any algorithms with acceptable speed for quaternion multiplication,
though, so regardless of whether we use existing hardware or not, it comes
down to the discovery of such algorithms.  Does Gaigen implement such
effective algorithms?

I am not advanced enough in mathematics to know the benefits of geometric
algebra, especially to graphics hardware, and thus can't speculate.

I have noticed your previous postings about Geometric Algebra and do find it
interesting, but struggle with figuring out how to apply it.

On Tue, Jul 26, 2011 at 2:12 PM, Wesley Smith wesley.h...@gmail.com wrote:

  This could change in the future to be more general purpose.  For example,
  hardware-based computations using quaternions and octonions.  As far as I
 am
  aware, it isn't done today for purely mathematical reasons; no one knows
  how.  And as far as I'm aware, such a mathematical breakthrough would be
  huge, but not something graphics vendors would pursue/fund, since it is
  basic research that can't be patented and so all graphics processors
 would
  get the same speedup. [1]


 Incidentally, this research has been going on for at least 10 years
 already and has made significant progress in terms of compiler tools
 and software systems that can be used for real-time systems.  In
 Guadalajara, there's a robotics group led by Eduardo
 Bayro-Corrochano[1] that makes amazing machines that perform their
 computations in an intrinsically spatial manner using geometric
 (Clifford) algebra.  One of the issues with this algebra is that the
 dimensionality of the computational space grows combinatorially.  The
 standard 3D conformal model (5D Minkowski space) is a 32 dimensional
 multi-vector.  Fortunately, there's some really good software that
 optimizes away the many redundancies and zero-ops called Gaigen[2],
 which can handle up to 12D Clifford Algebras.  Geometric algebra
 subsumes quaternions and adds a lot more interesting structures and
 operations.

 I don't think it requires basic research since it's just linear
 algebra and easily maps to GPU hardware.  Plus the research has
 already been done.  The software already exists for use if you want
 it.  I'm really not sure what interest the manufacturers would have in
 it though, since their applications are more specific than the general
 case of GA and lend themselves to more optimization.  Also, there's an
 entire world of mathematics that would have to be taught to everyone,
 since you aren't going to find courses in CS departments on this stuff
 except in a handful of labs around the world (Netherlands, Saudi
 Arabia, Mexico, Cambridge (in the physics department) ...)


 Here are some papers about GA and hardware:

 using FPGAs:
 http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.159.1691&rep=rep1&type=pdf
 conformal collision detection on GPUs:

 http://eduardoroa.3dsquash.com/portafolioeduv2/document/publications/SIACG_article.pdf

 and there are others implementing generic GA ops in graphics hardware
 that I wasn't able to find as quickly.


 [1] http://www.gdl.cinvestav.mx/edb/
 [2] http://staff.science.uva.nl/~fontijne/g25.html



Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-26 Thread Josh Gargus

On Jul 26, 2011, at 6:34 AM, John Zabroski wrote:

 
 
 On Tue, Jul 26, 2011 at 6:26 AM, Bert Freudenberg b...@freudenbergs.de 
 wrote:
 On 26.07.2011, at 02:17, John Zabroski wrote:
 
   99% of the chip space on GPUs these days is devoted to 3D, and chip space 
  for 2D primitives has shrunk exponentially in the last 15 years.
 
 Graphics hardware nowadays has (almost?) no fixed-function parts anymore. It 
 turned into a general-purpose SIMD coprocessor.
 
 - Bert -
 
 The state of the art in linear algebra is such that a general-purpose SIMD 
 coprocessor IS the hardware interface for 3D abstraction.

This doesn't make any sense to me.  Just because 3D today increasingly targets 
general-purpose SIMD coprocessors doesn't imply that those SIMD coprocessors 
are only suitable for 3D.  Contrary to your original assertion, when doing 2D you'll 
still be using all of those SIMD elements (99% of the chip won't be idle).


 
 This could change in the future to be more general purpose.  For example, 
 hardware-based computations using quaternions and octonions.  As far as I am 
 aware, it isn't done today for purely mathematical reasons; no one knows how. 
  And as far as I'm aware, such a mathematical breakthrough would be huge, but 
 not something graphics vendors would pursue/fund, since it is basic 
 research that can't be patented and so all graphics processors would get the 
 same speedup. [1]

http://en.wikipedia.org/wiki/Quaternion

Given two quaternions, it's trivial to write a GPU program to compute, e.g., their 
Hamilton product.  I'm not sure what you mean by hardware-based 
quaternions... quaternions are an algebraic entity whose defining products are 
easily and naturally implementable on a SIMD.

Cheers,
Josh


 
 Cheers,
 Z-Bo
 
 [1] I'm not an expert in graphics, so this is just really punditry.


Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-26 Thread Wesley Smith
 The value in quaternions is that they are a compact, direct representation
 of a transformation matrix in 3D space, ergo seems ideally suited for 3D
 graphics abstractions.  Technically, I suppose a software layer could do the
 optimization and map it to SIMD coprocessors, but figuring out hardware that
 could apply the same principles might result in even more speedup.  I don't
 know of any algorithms with acceptable speed for quaternion multiplication,
 though, so regardless of whether we use existing hardware or not, it comes
 down to the discovery of such algorithms.

Quaternion multiply is a basic linear algebra op.

In GLSL, it's:
vec4 qmul(vec4 q1, vec4 q2) {
    // Hamilton product; the scalar part is kept in .w
    return vec4(
        q1.w*q2.x + q1.x*q2.w + q1.y*q2.z - q1.z*q2.y,
        q1.w*q2.y - q1.x*q2.z + q1.y*q2.w + q1.z*q2.x,
        q1.w*q2.z + q1.x*q2.y - q1.y*q2.x + q1.z*q2.w,
        q1.w*q2.w - q1.x*q2.x - q1.y*q2.y - q1.z*q2.z
    );
}


The usual OpenGL axis-angle representation is exactly equivalent to quaternions.


 Does Gaigen implement such
 effective algorithms?

Gaigen is an algebra compiler, translating the pure mathematical
constructs into efficient C/C++/Java/... code.  It works by defining
the particular algebraic structure (e.g. 2D Euclidean metric, 5D
conformal model, etc.), a list of named objects (point, vector,
plane, trivector, circle, point pair, ...), and the available operators
(geometric product, dual, exponentiation, ...), and derives an
efficient set of computational structures, which in C++ would be the
corresponding classes and operators.  Its quaternion multiplication
will be fast despite existing as a generic GA object, because the
translation from abstract structure to code only computes what's
necessary.
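
(A very small sketch of the kind of expansion such an algebra compiler
automates, in Python -- bitmask-per-blade encoding, Euclidean metric assumed;
a real tool like Gaigen then specializes and prunes the result per named type:)

# Geometric product over basis blades encoded as bitmasks (bit i = e_{i+1}).
from collections import defaultdict

def blade_mul(a, b):
    """Product of basis blades a and b: returns (sign, resulting blade)."""
    s, t = 0, a >> 1
    while t:                       # count swaps needed to canonically reorder
        s += bin(t & b).count('1')
        t >>= 1
    return (-1 if s & 1 else 1), a ^ b

def gp(x, y):
    """Geometric product of multivectors given as {blade_mask: coefficient}."""
    out = defaultdict(int)
    for a, ca in x.items():
        for b, cb in y.items():
            sign, blade = blade_mul(a, b)
            out[blade] += sign * ca * cb
    return {k: v for k, v in out.items() if v}   # prune the zero terms

# 2D check: e1 * e2 = e12, and e12 * e12 = -1.
e1, e2, e12 = {0b01: 1}, {0b10: 1}, {0b11: 1}
print(gp(e1, e2))     # {3: 1}   i.e. e12
print(gp(e12, e12))   # {0: -1}  i.e. the 2D pseudoscalar squares to -1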



Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-26 Thread Wesley Smith
please excuse the double reply here:

 I have noticed your previous postings about Geometric Algebra and do find it
 interesting, but struggle with figuring out how to apply it.

This is really an underexplored area.  The best applications are
those that deal with inherently spatial tasks.  The best way to think
of GA is as a high-level spatial language where you can reason in
terms of points, lines, spheres, etc. and deduce computational systems
from them.  It's literally a spatial logic just like Boolean algebra
is a binary logic.

some apps: the robotics group I mentioned earlier embeds the spatial
structure/constraints/controls of the robot by linking together GA
objects in a chain that effectively forms a spatial reasoning system.
Other apps I've seen model the optics of various cameras (catadioptric,
stereo, ...) to recover spatial information about a scene.  Other
people use it to analyze complex-valued vector fields.  GA is also
extremely useful for calculating rigid-body motions.  Every Euclidean
motion (rotation, dilation, transversion, reflection) can be expressed
in the form x' = VxV^-1, where V describes the transformation.  It
doesn't get any simpler than that.
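
(A sketch of that sandwich form in its most familiar special case -- a unit
quaternion, i.e. a rotor, rotating a 3D vector; Python, names invented:)

# x' = V x V^-1 with V a unit quaternion (w, x, y, z); for unit V the
# inverse is just the conjugate.
import math

def qmul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def rotate(v, axis, angle):
    h = angle / 2.0
    V = (math.cos(h),) + tuple(math.sin(h) * a for a in axis)  # the versor
    Vinv = (V[0], -V[1], -V[2], -V[3])                 # conjugate = inverse
    w, x, y, z = qmul(qmul(V, (0.0,) + tuple(v)), Vinv)  # V x V^-1
    return (x, y, z)

# Rotate the x axis 90 degrees about z: expect (0, 1, 0) up to rounding.
print(rotate((1, 0, 0), (0, 0, 1), math.pi / 2))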

wes



Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-26 Thread John Zabroski
On Tue, Jul 26, 2011 at 3:26 PM, Josh Gargus j...@schwa.ca wrote:


 On Jul 26, 2011, at 6:34 AM, John Zabroski wrote:



 On Tue, Jul 26, 2011 at 6:26 AM, Bert Freudenberg b...@freudenbergs.dewrote:

 On 26.07.2011, at 02:17, John Zabroski wrote:

   99% of the chip space on GPUs these days is devoted to 3D, and chip
  space for 2D primitives has shrunk exponentially in the last 15 years.

 Graphics hardware nowadays has (almost?) no fixed-function parts anymore.
 It turned into a general-purpose SIMD coprocessor.

 - Bert -


 The state of the art in linear algebra is such that a general-purpose SIMD
 coprocessor IS the hardware interface for 3D abstraction.


 This doesn't make any sense to me.  Just because 3D today increasingly
 targets general-purpose SIMD coprocessors doesn't imply that those SIMD
 coprocessors are only suitable for 3D.  Unlike your original assertion, when
 doing 2D you'll still be using all of those SIMD elements (99% of the chip
 won't be idle).



Graphics hardware has continuously evolved from specific to general.  3D
is a generalization of 2D where the z plane is constant.  Libraries like SDL
still use hardware that has been generalized for better performance with 3D,
even if it does not support 3D abstractions.




 This could change in the future to be more general purpose.  For example,
 hardware-based computations using quaternions and octonions.  As far as I am
 aware, it isn't done today for purely mathematical reasons; no one knows
 how.  And as far as I'm aware, such a mathematical breakthrough would be
 huge, but not something graphics vendors would pursue/fund, since it is
 basic research that can't be patented and so all graphics processors would
 get the same speedup. [1]


 http://en.wikipedia.org/wiki/Quaternion

 Given two quaternions, it's trivial to write a GPU program to compute, e.g.,
 their Hamilton product.  I'm not sure what you mean by hardware-based
 quaternions... quaternions are an algebraic entity whose defining products
 are easily and naturally implementable on a SIMD.



The hardware would support special acceleration (perhaps internally) for
linear transformations in 3D space.

Different hardware may not require a change for the programmer, but if the
programmer (or compiler assisting the programmer) can describe the structure
of the computation, then the hardware might be able to take advantage of
it.  The hardware might also be able to trace repeated executions and
dynamically discover structure from patterns of communication.


Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-25 Thread Alan Kay
Hi Benoît

I don't know of an another attempt to build a whole system with wide properties 
in DSLs. But it wouldn't surprise me if there were some others around. It 
requires more design effort, and the tools to make languages need to be 
effective and as easy as possible, but the payoffs are worth it. I was asked 
this question after the HPI talk: what about the Tower of Babel from using 
DSLs -- isn't there a learning curve problem?

My answer was: yes there is, but if you can get factors of 100s to 1000s of 
decrease in size and increase in clarity, the tradeoff will be more like you 
have to learn 7 languages, but then there are only a few hundred pages of code 
in the whole system -- vs -- you only have to learn one language but the system 
is 4 million pages of code, so you will never come close to understanding it.

(Hint: try to avoid poor language designs -- like perl etc. -- for your DSLs 
...)

This is kind of a "mathematics is a plural" situation that we already have. 
Maths are made up as DSLs to efficiently represent and allow thinking about 
many 
different kinds of domains. One of the things one learns while learning math is 
how to learn new representations.

This used to be the case 50 years ago when most programming was done in machine 
code. When I was a journeyman programmer at that time, I had to learn 10 or 12 
different instruction sets and macro-assembler systems for the many different 
computers I had to program in the Air Force and then at NCAR. We also had to 
learn a variety of mid-level languages such as Fortran, COBOL, RPG, etc. This 
was thought of as no big deal back then, it was just part of the process.

So when people started talking in the 60s about POLs in research (Problem 
Oriented Languages -- what are called DSLs today) this seemed like a very good 
idea to most people (provided that you could get them to be efficient enough). 
This led partly to Ted Steele's idea of an UNCOL (Universal Computer Oriented 
Language) which was a relatively low-level target for higher level languages 
whose back-end could be optimized just once for each cpu. Historically, C wound 
up filling this role about 10 years later for people who wanted a universal 
target with an optimizer attached.

Overall, I would say that the biggest difficulties -- in general -- are still 
the result of not knowing how to design each and every level of software well 
enough.

Cheers,

Alan






From: Benoît Fleury benoit.fle...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org
Sent: Sun, July 24, 2011 11:45:10 AM
Subject: Re: [fonc] HotDraw's Tool State Machine Editor

Hi Dr Kay,

thank you for the pointer to Newman's work I was not aware of.

Regarding the state machine, Engelbart already pointed out that it was
not a good model for user control language in [1].

In the first attempt, the control language was described as a finite
state machine, and the language allowed a formal textual definition of
such a machine. [...] It was originally thought that such an approach
was adequate for the definition of user-system control languages. But,
to paraphrase John McCarthy, the model is metaphysically adequate, but
epistemologically inadequate. Implementation revealed that the
dialogue is a non-Markovian (nonstochastic,
historically dependent) process on the part of both the machine and
the user, and accurate characterization as a finite state machine
results in so many states that the model is useless. A better model is
a two-stack automaton with a small number of immediate-access storage
registers.

I didn't encounter a lot of systems like NLS/AUGMENT during my time at
a French engineering school. I guess the situation is similar in US
universities. I'm trying now to catch up and was wondering if there
are other software systems built using the same principles and
techniques (collection of domain specific languages). I, of course,
know already about Frank and the STEPS project.

Thank you again for the pointers.

- Benoit

[1] Development of a multidisplay, time-shared computer facility and
computer-augmented management-system research (Final Report),
http://bitsavers.org/pdf/sri/arc/Development_of_a_Multidisplay_Time-Shared_Computer_Facility_Apr68.pdf




On Sat, Jul 23, 2011 at 11:39 PM, Alan Kay alan.n...@yahoo.com wrote:
 The idea of using a grammar to create a user interface goes back at least as
 far as Engelbart's AHI group. They used a distant past cousin of OMeta
 (called Tree Meta) to do this. Ca. 1966.

 One of the first systems to specify and make graphical grammars (and UIs)
 via user interactions was William Newman's The Reaction Handler PhD thesis
 about the same time. (William is the Newman of Newman and Sproull).

 It's worthwhile to contemplate that a state machine (recursive or not) is
 the opposite of modeless -- it is the epitome of modes. So this is not a
 great way to specify a really nice modeless interface (because you have to
 draw arrows

Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-24 Thread Alan Kay
The idea of using a grammar to create a user interface goes back at least as 
far 
as Engelbart's AHI group. They used a distant past cousin of OMeta (called Tree 
Meta) to do this. Ca. 1966.

One of the first systems to specify and make graphical grammars (and UIs) via 
user interactions was William Newman's The Reaction Handler PhD thesis about 
the same time. (William is the Newman of Newman and Sproull).

It's worthwhile to contemplate that a state machine (recursive or not) is the 
opposite of modeless -- it is the epitome of modes. So this is not a great 
way 
to specify a really nice modeless interface (because you have to draw arrows 
outward from pretty much every state to pretty much every other state). 
Modeless at PARC meant you don't have to explicitly back out of your current 
'mode' to initiate any other command.

Cheers,

Alan





From: Benoît Fleury benoit.fle...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org
Sent: Sat, July 23, 2011 11:05:49 PM
Subject: [fonc] HotDraw's Tool State Machine Editor

Hi,

I found HotDraw's tool state machine editor [1] very interesting as a
graphical editor for a syntax-directed translator. The state machine
transforms a stream of mouse events into a stream of commands on the
structured drawing. Did I push the analogy too far?
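
(A sketch of that reading, in Python -- tool states, events, and commands all
invented: each transition consumes a mouse event and emits a command on the
drawing, exactly like a syntax-directed translation over the event stream:)

# A HotDraw-style tool as a finite state machine over mouse events.
TRANSITIONS = {
    ('idle',     'press'):   ('dragging', 'begin_rect'),
    ('dragging', 'move'):    ('dragging', 'grow_rect'),
    ('dragging', 'release'): ('idle',     'commit_rect'),
}

def run_tool(events, state='idle'):
    commands = []                      # the translated command stream
    for ev in events:
        state, cmd = TRANSITIONS.get((state, ev), (state, None))
        if cmd:
            commands.append(cmd)
    return commands

print(run_tool(['press', 'move', 'move', 'release']))
# -> ['begin_rect', 'grow_rect', 'grow_rect', 'commit_rect']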

I was wondering if anyone knows similar examples of graphical editor
for grammars?

Moreover, we didn't find (yet) a good metaphor for writing programs in
a general-purpose programming language in a graphical editor. Do you
think that might change with domain specific languages?

Thank you to everyone on this list for the very interesting
discussions and links.

- Benoit

[1] http://st-www.cs.illinois.edu/users/brant/HotDraw/Conversion.html



Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-24 Thread Benoît Fleury
Hi Dr Kay,

thank you for the pointer to Newman's work I was not aware of.

Regarding the state machine, Engelbart already pointed out that it was
not a good model for user control language in [1].

In the first attempt, the control language was described as a finite
state machine, and the language allowed a formal textual definition of
such a machine. [...] It was originally thought that such an approach
was adequate for the definition of user-system control languages. But,
to paraphrase John McCarthy, the model is metaphysically adequate, but
epistemologically inadequate. Implementation revealed that the
dialogue is a non-Markovian (nonstochastic,
historically dependent) process on the part of both the machine and
the user, and accurate characterization as a finite state machine
results in so many states that the model is useless. A better model is
a two-stack automaton with a small number of immediate-access storage
registers.

I didn't encounter a lot of systems like NLS/AUGMENT during my time at
a French engineering school. I guess the situation is similar in US
universities. I'm trying now to catch up and was wondering if there
are other software systems built using the same principles and
techniques (collection of domain specific languages). I, of course,
know already about Frank and the STEPS project.

Thank you again for the pointers.

- Benoit

[1] Development of a multidisplay, time-shared computer facility and
computer-augmented management-system research (Final Report),
http://bitsavers.org/pdf/sri/arc/Development_of_a_Multidisplay_Time-Shared_Computer_Facility_Apr68.pdf



On Sat, Jul 23, 2011 at 11:39 PM, Alan Kay alan.n...@yahoo.com wrote:
 The idea of using a grammar to create a user interface goes back at least as
 far as Engelbart's AHI group. They used a distant past cousin of OMeta
 (called Tree Meta) to do this. Ca. 1966.

 One of the first systems to specify and make graphical grammars (and UIs)
 via user interactions was William Newman's The Reaction Handler PhD thesis
 about the same time. (William is the Newman of Newman and Sproull).

 It's worthwhile to contemplate that a state machine (recursive or not) is
 the opposite of modeless -- it is the epitome of modes. So this is not a
 great way to specify a really nice modeless interface (because you have to
 draw arrows outward from pretty much every state to pretty much every other
 state). Modeless at PARC meant you don't have to explicitly back out of
 your current 'mode' to initiate any other command.

 Cheers,

 Alan

 
 From: Benoît Fleury benoit.fle...@gmail.com
 To: Fundamentals of New Computing fonc@vpri.org
 Sent: Sat, July 23, 2011 11:05:49 PM
 Subject: [fonc] HotDraw's Tool State Machine Editor

 Hi,

 I found HotDraw's tool state machine editor [1] very interesting as a
 graphical editor for a syntax-directed translator. The state machine
 transforms a stream of mouse events into a stream of commands on the
 structured drawing. Did I push the analogy too far?

 I was wondering if anyone knows similar examples of graphical editor
 for grammars?

 Moreover, we didn't find (yet) a good metaphor for writing programs in
 a general-purpose programming language in a graphical editor. Do you
 think that might change with domain specific languages?

 Thank you to everyone on this list for the very interesting
 discussions and links.

 - Benoit

 [1] http://st-www.cs.illinois.edu/users/brant/HotDraw/Conversion.html



