Re: [fonc] Question about OMeta

2011-04-14 Thread nchen . dev
Hi 

Just to elaborate more on the previous post by Jamie :-)

It's not just me who's involved at UIUC. My adviser, Ralph Johnson, and two of 
his other students, Jeff and Munawar, are also involved. Jeff's research 
focuses on making it easier to create refactoring engines, and he is our go-to 
person when we have questions about grammars and parsing. Munawar's research 
focuses on security-oriented transformations, and he gives us different 
perspectives on where we might be able to apply the LoLs work.

Like Jamie said, if anyone has any questions, feel free to contact us.

--
Nick

On Apr 13, 2011, at 9:00 PM, Douglass, Jamie wrote:

 John,
 Language of Languages (LoLs) presented during the FlexiTools workshop at 
 SPLASH 2010 uses a CAT parser. CAT (which is now Contextual Attributed 
 Translator) is very similar to OMeta. It continues the work on left recursion 
 with packrat parsers that Alex Warth and I wrote about. The current version of 
 CAT has simpler, more general, and slightly faster left-recursion support than 
 we had in the original paper (PEPM 2008). Unlike packrat parsers, CAT memoizes 
 only what it knows it will need again to avoid reparsing, and keeps memos only 
 for as long as needed. This gives linear runtime performance without the 
 memory burden normally associated with packrat parsing, and faster individual 
 parsing operations.
  
 LoLs uses language translation as a kind of superglue between multiple ways 
 to represent concepts based on context for various domains and languages 
 http://www.ics.uci.edu/~nlopezgi/flexitools/papers/douglass_flexitools_splash2010.pdf.
   During the FlexiTools workshop this style of translation was referred to as 
 “context on steroids”.
  
 Nicholas Chen at UIUC and I are working on the bootstrap version of LoLs with 
 CAT. We are hoping to have the initial open source release this fall in time 
 for SPLASH 2011. I can share more about LoLs and CAT if folks are interested.
  
 Jamie
  
 From: Alan Kay [mailto:alan.n...@yahoo.com] 
 Sent: Tuesday, April 12, 2011 9:53 AM
 To: Fundamentals of New Computing; Douglass, Jamie
 Subject: Re: [fonc] Question about OMeta
  
 Hi John
 
 Alex Warth and Jamie Douglass co-wrote a paper on Pack Rat Parsers a few 
 years ago 
 
 I asked you because you like to poke around both in the present and in the 
 past.
 
 Cheers,
 
 Alan
  
 From: John Zabroski johnzabro...@gmail.com
 To: Fundamentals of New Computing fonc@vpri.org; jamie.dougl...@boeing.com
 Sent: Mon, April 11, 2011 8:21:06 PM
 Subject: Re: [fonc] Question about OMeta
 
 On Sat, Apr 9, 2011 at 12:09 AM, Alan Kay alan.n...@yahoo.com wrote:
 But now you are adding some side conditions :)
 
 For example, if you want comparable or even better abstractions in the target 
 language, then there is a lot more work that has to be done (and I don't know 
 of a great system that has ever been able to do this well e.g. to go from an 
 understandable low level meaning to really nice code using the best 
 abstractions of the target language). Maybe John Z knows?
 
 Alan,
 
 There was a guy at SPLASH 2010 that was talking about wanting to build such a 
 system.  I think he was a researcher at Boeing, but he came across as so 
 practically minded that I thought he was a programmer just like me.
 
 I don't know why you thought I specifically would have any ideas on this... 
 but... 
 
 Tell me your thoughts on 
 http://www.ics.uci.edu/~nlopezgi/flexitools/papers/douglass_flexitools_splash2010.pdf
 
 I am surprised you didn't mention this above since he uses Squeak for the 
 bootstrap.  I suggested at SPLASH that he contact you (VPRI, really), 
 especially when you consider how close by you are.
 
 As for UNCOL, I have Sammet's book on programming and there are some really 
 interesting conferences from the 1950s that are covered in the 
 preface/disclaimer.  Well, at least I think it's the book that mentions it.  
 Either way I couldn't easily look up these references or find proceedings 
 from conferences in the 1950s.
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc




RE: [fonc] Question about OMeta

2011-04-13 Thread Douglass, Jamie
John,
Language of Languages (LoLs) presented during the FlexiTools workshop at SPLASH 
2010 uses a CAT parser. CAT (which is now Contextual Attributed Translator) is 
very similar to OMeta. It continues the work on left recursion with packrat 
parsers that Alex Warth and I wrote about. The current version of CAT has 
simpler, more general, and slightly faster left-recursion support than we had 
in the original paper (PEPM 2008). Unlike packrat parsers, CAT memoizes only 
what it knows it will need again to avoid reparsing, and keeps memos only for 
as long as needed. This gives linear runtime performance without the memory 
burden normally associated with packrat parsing, and faster individual parsing 
operations.
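
For readers who haven't met packrat parsing, a minimal sketch of the classic
memoize-everything approach may make the trade-off concrete. This is a toy
grammar in Python, not CAT; the names are invented for illustration:

```python
# Classic packrat memoization for a toy grammar.  Every (rule, position)
# result is cached so each rule runs at most once per input position,
# giving linear time -- at the cost of a memo table that grows with the
# input and is kept for the whole parse.

def make_parser(src):
    memo = {}  # (rule_name, pos) -> (success, new_pos)

    def memoized(rule):
        def wrapper(pos):
            key = (rule.__name__, pos)
            if key not in memo:
                memo[key] = rule(pos)
            return memo[key]
        wrapper.__name__ = rule.__name__
        return wrapper

    @memoized
    def digit(pos):
        if pos < len(src) and src[pos].isdigit():
            return True, pos + 1
        return False, pos

    @memoized
    def number(pos):  # number <- digit+
        ok, p = digit(pos)
        if not ok:
            return False, pos
        while True:
            ok, p2 = digit(p)
            if not ok:
                return True, p
            p = p2

    return number, memo

number, memo = make_parser("12345")
ok, end = number(0)
print(ok, end, len(memo))  # True 5 7
```

CAT's refinement, per the message above, is to memoize only what it knows it
will need again and to drop memos as soon as possible; the sketch instead keeps
every (rule, position) entry for the life of the parse, which is exactly the
memory burden being avoided.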

LoLs uses language translation as a kind of superglue between multiple ways to 
represent concepts based on context for various domains and languages 
represent concepts based on context for various domains and languages 
http://www.ics.uci.edu/~nlopezgi/flexitools/papers/douglass_flexitools_splash2010.pdf.
During the FlexiTools workshop this style of translation was referred to as 
"context on steroids".

Nicholas Chen at UIUC and I are working on the bootstrap version of LoLs with 
CAT. We are hoping to have the initial open source release this fall in time 
for SPLASH 2011. I can share more about LoLs and CAT if folks are interested.

Jamie

From: Alan Kay [mailto:alan.n...@yahoo.com]
Sent: Tuesday, April 12, 2011 9:53 AM
To: Fundamentals of New Computing; Douglass, Jamie
Subject: Re: [fonc] Question about OMeta

Hi John

Alex Warth and Jamie Douglass co-wrote a paper on Pack Rat Parsers a few 
years ago 

I asked you because you like to poke around both in the present and in the past.

Cheers,

Alan


From: John Zabroski johnzabro...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org; jamie.dougl...@boeing.com
Sent: Mon, April 11, 2011 8:21:06 PM
Subject: Re: [fonc] Question about OMeta
On Sat, Apr 9, 2011 at 12:09 AM, Alan Kay alan.n...@yahoo.com wrote:
But now you are adding some side conditions :)

For example, if you want comparable or even better abstractions in the target 
language, then there is a lot more work that has to be done (and I don't know 
of a great system that has ever been able to do this well e.g. to go from an 
understandable low level meaning to really nice code using the best 
abstractions of the target language). Maybe John Z knows?

Alan,

There was a guy at SPLASH 2010 that was talking about wanting to build such a 
system.  I think he was a researcher at Boeing, but he came across as so 
practically minded that I thought he was a programmer just like me.

I don't know why you thought I specifically would have any ideas on this... 
but...

Tell me your thoughts on 
http://www.ics.uci.edu/~nlopezgi/flexitools/papers/douglass_flexitools_splash2010.pdf

I am surprised you didn't mention this above since he uses Squeak for the 
bootstrap.  I suggested at SPLASH that he contact you (VPRI, really), 
especially when you consider how close by you are.

As for UNCOL, I have Sammet's book on programming and there are some really 
interesting conferences from the 1950s that are covered in the 
preface/disclaimer.  Well, at least I think it's the book that mentions it.  
Either way I couldn't easily look up these references or find proceedings from 
conferences in the 1950s.


Re: [fonc] Question about OMeta

2011-04-12 Thread Alan Kay
Hi John

Alex Warth and Jamie Douglass co-wrote a paper on Pack Rat Parsers a few 
years 
ago 

I asked you because you like to poke around both in the present and in the past.

Cheers,

Alan





From: John Zabroski johnzabro...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org; jamie.dougl...@boeing.com
Sent: Mon, April 11, 2011 8:21:06 PM
Subject: Re: [fonc] Question about OMeta




On Sat, Apr 9, 2011 at 12:09 AM, Alan Kay alan.n...@yahoo.com wrote:

But now you are adding some side conditions :)

For example, if you want comparable or even better abstractions in the target 
language, then there is a lot more work that has to be done (and I don't know 
of 
a great system that has ever been able to do this well e.g. to go from an 
understandable low level meaning to really nice code using the best 
abstractions 
of the target language). Maybe John Z knows?


Alan,

There was a guy at SPLASH 2010 that was talking about wanting to build such a 
system.  I think he was a researcher at Boeing, but he came across as so 
practically minded that I thought he was a programmer just like me. 


I don't know why you thought I specifically would have any ideas on this... 
but... 


Tell me your thoughts on 
http://www.ics.uci.edu/~nlopezgi/flexitools/papers/douglass_flexitools_splash2010.pdf


I am surprised you didn't mention this above since he uses Squeak for the 
bootstrap.  I suggested at SPLASH that he contact you (VPRI, really), 
especially 
when you consider how close by you are.

As for UNCOL, I have Sammet's book on programming and there are some really 
interesting conferences from the 1950s that are covered in the 
preface/disclaimer.  Well, at least I think it's the book that mentions it.  
Either way I couldn't easily look up these references or find proceedings from 
conferences in the 1950s.


Re: [fonc] Question about OMeta

2011-04-12 Thread Alan Kay
Not a real theory yet but ..

If both sides of the negotiation implemented very simple working models of what 
they do, then a combination of matching and discovery could establish a 
probability of matchup. This mimics what a programmer would do. One would 
like 
to have a broker that can find possible resources and then perform some 
negotiation experiments to help find the interoperabilities.
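
Very loosely, and with all module names invented for illustration, the
broker-plus-experiments idea might be sketched like this: each module publishes
a very simple working model of what it does (here, just a callable), and the
broker runs small negotiation experiments, feeding one module's sample outputs
to another, and turns the results into a probability of matchup:

```python
# Hypothetical sketch of a negotiation broker; not a real protocol.

def experiment(producer, consumer, samples):
    """Estimate P(consumer can use producer's output) by trial."""
    successes = 0
    for sample in samples:
        try:
            consumer(producer(sample))
            successes += 1
        except Exception:
            pass  # this pairing failed on this sample
    return successes / len(samples)

# Toy modules: a sensor and two candidate consumers of its readings.
celsius_sensor = lambda t: {"celsius": t}
fahrenheit_display = lambda r: 32 + 9 * r["celsius"] / 5
kelvin_display = lambda r: r["kelvin"] + 273.15  # expects a different model

samples = [0, 25, 100]
print(experiment(celsius_sensor, fahrenheit_display, samples))  # 1.0
print(experiment(celsius_sensor, kelvin_display, samples))      # 0.0
```

A real broker would of course also discover candidate modules and use richer
models than a bare callable, but the score is the "probability of matchup" the
message describes.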

Cheers,

Alan





From: John Zabroski johnzabro...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org
Sent: Mon, April 11, 2011 9:27:54 PM
Subject: Re: [fonc] Question about OMeta




On Sat, Apr 9, 2011 at 12:09 AM, Alan Kay alan.n...@yahoo.com wrote:

The larger problems will require something like negotiation between modules 
(this idea goes back to some of the agent ideas at PARC, and was partially 
catalyzed by the AM and Eurisko work by Doug Lenat).


Separate thread of thought:

Some rather successful designs do recursive negotiation for request resolution. 
 
I gave an HTTP example on LtU awhile back [1], explaining why REST is such a 
good design for an Interpreter pattern to handle very large-scale systems.  I 
also link it to the best solution to Wadler's Expression Problem that I've seen 
yet (and, according to Wadler, the best he's seen in Haskell [2]; the reader 
comments there are pretty good as well): Data Types a la Carte.

Also, Sameer Sundresh recently completed his Ph.D. thesis, "Request-Based 
Mediated Execution" [3], under José Meseguer.  I spoke with him about how 
broadly applicable I felt his ideas were, but we seemed to part views on the 
best practical demonstrations for his work.

For example, Sameer is now a founder at Djangy, which provides cloud hosting 
for Django apps.  He thought that the ideas in his thesis were good building blocks 
for automatically sandboxing system resources, such as in a multi-tenancy cloud 
app.  I disagreed, since I would prefer a system built from first principles 
using an ocaps system.  What I meant was that his good example would become 
obsolete in 50 years, and so I was pushing for examples that I thought would be 
timeless.  I suggested an Object-Relational Mapper architecture built using 
this 
sort of recursive negotiation, since it doesn't work that way today in any ORM 
implementation and would emphasize the biggest feature of his thesis: Giving 
the 
power to the programmer, rather than the language's interpreter.

But a big challenge is figuring out how to verify this sort of 
call-by-intention 
is correct.

[1] http://lambda-the-ultimate.org/node/3846#comment-57350
[2] http://wadler.blogspot.com/2008/02/data-types-la-carte.html
[3] 
http://www-osl.cs.uiuc.edu/docs/sundresh-dissertation-2009/sundresh-dissertation-2009.pdf


Re: [fonc] Question about OMeta

2011-04-12 Thread Casey Ransberger
I'm not sure I grok what we mean by "inter-module negotiation." Can anyone give 
me some pointers to prior work? I will look at the paper that Mr. Zabroski 
suggested. 

On Apr 12, 2011, at 10:08 AM, Alan Kay alan.n...@yahoo.com wrote:

 Not a real theory yet but ..
 
 If both sides of the negotiation implemented very simple working models of 
 what they do, then a combination of matching and discovery could establish 
 a probability of matchup. This mimics what a programmer would do. One would 
 like to have a broker that can find possible resources and then perform some 
 negotiation experiments to help find the interoperabilities.
 
 Cheers,
 
 Alan
 
 From: John Zabroski johnzabro...@gmail.com
 To: Fundamentals of New Computing fonc@vpri.org
 Sent: Mon, April 11, 2011 9:27:54 PM
 Subject: Re: [fonc] Question about OMeta
 
 
 
 On Sat, Apr 9, 2011 at 12:09 AM, Alan Kay alan.n...@yahoo.com wrote:
 The larger problems will require something like negotiation between modules 
 (this idea goes back to some of the agent ideas at PARC, and was partially 
 catalyzed by the AM and Eurisko work by Doug Lenat).
 
 Separate thread of thought:
 
 Some rather successful designs do recursive negotiation for request 
 resolution.  I gave an HTTP example on LtU awhile back [1], explaining why 
 REST is such a good design for an Interpreter pattern to handle very 
 large-scale systems.  I also link it to the best solution to Wadler's 
 Expression Problem that I've seen yet (and, according to Wadler, the best 
 he's seen in Haskell [2]; the reader comments there are pretty good as well): 
 Data Types a la Carte.
 
 Also, Sameer Sundresh recently completed his Ph.D. thesis, "Request-Based 
 Mediated Execution" [3], under José Meseguer.  I spoke with him about how 
 broadly applicable I felt his ideas were, but we seemed to part views on the 
 best practical demonstrations for his work.
 
 For example, Sameer is now a founder at Djangy, which provides cloud hosting 
 for Django apps.  He thought that the ideas in his thesis were good building 
 blocks for automatically sandboxing system resources, such as in a 
 multi-tenancy cloud app.  I disagreed, since I would prefer a system built 
 from first principles using an ocaps system.  What I meant was that his good 
 example would become obsolete in 50 years, and so I was pushing for examples 
 that I thought would be timeless.  I suggested an Object-Relational Mapper 
 architecture built using this sort of recursive negotiation, since it doesn't 
 work that way today in any ORM implementation and would emphasize the biggest 
 feature of his thesis: Giving the power to the programmer, rather than the 
 language's interpreter.
 
 But a big challenge is figuring out how to verify this sort of 
 call-by-intention is correct.
 
 [1] http://lambda-the-ultimate.org/node/3846#comment-57350
 [2] http://wadler.blogspot.com/2008/02/data-types-la-carte.html
 [3] 
 http://www-osl.cs.uiuc.edu/docs/sundresh-dissertation-2009/sundresh-dissertation-2009.pdf
 
 


Re: [fonc] Question about OMeta

2011-04-11 Thread John Zabroski
Thanks Julian.

I covered Migrations above. See reference [4].

I would view migrations as a way to encapsulate formed meanings.

Something that has always struck me as funny about the NoSQL movement
is the complaining about how much of a PITA schema versioning in an
RDBMS is. I've never served millions of pages a day, but I've always
been deeply suspicious of the cause-effect relationship described by
the engineers at the biggest sites. Put simply, it doesn't make any
sense.  It's a bad correlation. The next big architectural step
forward will rectify this with a better correlation.

On 4/10/11, Julian Leviston jul...@leviston.net wrote:
 You should probably have a look at ActiveRecord::Migration, which is part of
 Rails, if you're interested in SQL-based systems. In fact, ActiveRecord in
 general is a really wonderful abstraction system - and a very good mix of
 "do what you *can* in a programming-language-based DSL, and what you can't
 in direct SQL".

 http://api.rubyonrails.org/classes/ActiveRecord/Migration.html

 Julian.

 On 11/04/2011, at 2:58 AM, John Nilsson wrote:

 Wow, thanks. This will keep me occupied for a while ;-)

 Regarding AI completeness and the quest for automation: in my mind it's
 better to start with making it simpler for humans to do, and just keep
 making it simpler until you can remove the human element. This way you
 can put something out there much quicker and get feedback on what
 works, and what doesn't. Most importantly, it puts something out there
 for other developers (people smarter than me) to extend and improve.

 Regarding the database, I have some ideas of experimenting with a
 persistent approach to data storage, employing a scheme with
 branching and human-assisted merging to handle evolution of the
 system. For a fast-moving target such as a typical OLTP system there
 must be automated merging, of course, but I see no reason for the
 algorithms to be either general or completely automatic. I think we
 can safely assume that there will be some competent people around to
 assist with the merging. After all, there must be humans involved to
 trigger the evolution to begin with.

 To solve the impedance mismatch between the dynamic world of the
 database and the static world of application development, I'm thinking
 the best approach is to simply remove it. Why have a static,
 unconnected version of the application at all? After all, code is
 data, and data belongs in the database.

 To have any hope of getting any kind of prototype out there I have,
 for now, decided to avoid thinking about distributed systems and/or
 untrusted system components. I guess this will be a 3.0 concern ;-)

 BR,
 John

 On Sun, Apr 10, 2011 at 4:40 PM, John Zabroski johnzabro...@gmail.com
 wrote:
 John,

 Disagree it is a simple thing, but it is a good example.

 It also demonstrates blending well, since analogies are used all the
 time in this domain to circumvent impedance mismatches.

 For example, versioning very large database systems' schema is
 non-trivial since the default methods don't scale:
 alter table BigTable add /*column*/ foo int

 This will lock out all readers and writers until it completes.
 Effectively it is a denial of service attack. Predicting its
 completion time is difficult, since it will depend on how the table
 was previously built (e.g. if anything fancy was done storing sparse
 columns; if there is still storage space available in-row to store the
 int required by this new column thus avoiding a complete rebuild; if
 the table needs to be completely rebuilt, then so do its indices; if
 the table is sharded across many independent disks, then the storage
 engine can parallelize the task). The *intention* is to add a column
 to a table, presumably for some new requirement. But there is a latent
 requirement on the intention, forming a new meaning, that nobody
 should observe a delay during the schema upgrade.

 Now, if the default method isn't robust enough, then what is? And what
 do we call it?
 
 Well, what I did to solve this problem was type "how to add a column
 to a large table" into Google [1].
 
 As for naming it, well, the enterprise software community came up with
 this concept called "Database Refactorings" [2] [3] or simply
 "Migrations" [4], which are heuristic systems for approximating the
 Holy Grail of having a reversible logic for schema operations
 (generally difficult due to destructive changes and other problems).
 Programmers procedurally embed knowledge on how to change the schema,
 and then just pass messages to a server that has all of this
 procedural knowledge embedded in it. It is interesting (to me, anyway)
 that programmers have developed a human process for working around a
 complex theoretical problem (e.g., see [5] for a discussion of the
 challenges in building a lingua franca for data integration, schema
 evolution, database design, and model management), without ever
 knowing the problems.  Good designers realize there is a structural
 

Re: [fonc] Question about OMeta

2011-04-11 Thread John Zabroski
On Sat, Apr 9, 2011 at 12:09 AM, Alan Kay alan.n...@yahoo.com wrote:

 But now you are adding some side conditions :)

 For example, if you want comparable or even better abstractions in the
 target language, then there is a lot more work that has to be done (and I
 don't know of a great system that has ever been able to do this well e.g. to
 go from an understandable low level meaning to really nice code using the
 best abstractions of the target language). Maybe John Z knows?


Alan,

There was a guy at SPLASH 2010 that was talking about wanting to build such
a system.  I think he was a researcher at Boeing, but he came across as so
practically minded that I thought he was a programmer just like me.

I don't know why you thought I specifically would have any ideas on this...
but...

Tell me your thoughts on
http://www.ics.uci.edu/~nlopezgi/flexitools/papers/douglass_flexitools_splash2010.pdf

I am surprised you didn't mention this above since he uses Squeak for the
bootstrap.  I suggested at SPLASH that he contact you (VPRI, really),
especially when you consider how close by you are.

As for UNCOL, I have Sammet's book on programming and there are some really
interesting conferences from the 1950s that are covered in the
preface/disclaimer.  Well, at least I think it's the book that mentions it.
Either way I couldn't easily look up these references or find proceedings
from conferences in the 1950s.


Re: [fonc] Question about OMeta

2011-04-11 Thread John Zabroski
On Sat, Apr 9, 2011 at 12:09 AM, Alan Kay alan.n...@yahoo.com wrote:

 The larger problems will require something like negotiation between
 modules (this idea goes back to some of the agent ideas at PARC, and was
 partially catalyzed by the AM and Eurisko work by Doug Lenat).


Separate thread of thought:

Some rather successful designs do recursive negotiation for request
resolution.  I gave an HTTP example on LtU awhile back [1], explaining why
REST is such a good design for an Interpreter pattern to handle very
large-scale systems.  I also link it to the best solution to Wadler's
Expression Problem that I've seen yet (and, according to Wadler, the best
he's seen in Haskell [2]; the reader comments there are pretty good as
well): Data Types a la Carte.

Also, Sameer Sundresh recently completed his Ph.D. thesis, "Request-Based
Mediated Execution" [3], under José Meseguer.  I spoke with him about how
broadly applicable I felt his ideas were, but we seemed to part views on the
best practical demonstrations for his work.

For example, Sameer is now a founder at Djangy, which provides cloud hosting
for Django apps.  He thought that the ideas in his thesis were good building
blocks for automatically sandboxing system resources, such as in a
multi-tenancy cloud app.  I disagreed, since I would prefer a system built
from first principles using an ocaps system.  What I meant was that his
good example would become obsolete in 50 years, and so I was pushing for
examples that I thought would be timeless.  I suggested an Object-Relational
Mapper architecture built using this sort of recursive negotiation, since it
doesn't work that way today in any ORM implementation and would emphasize
the biggest feature of his thesis: Giving the power to the programmer,
rather than the language's interpreter.

But a big challenge is figuring out how to verify this sort of
call-by-intention is correct.

[1] http://lambda-the-ultimate.org/node/3846#comment-57350
[2] http://wadler.blogspot.com/2008/02/data-types-la-carte.html
[3]
http://www-osl.cs.uiuc.edu/docs/sundresh-dissertation-2009/sundresh-dissertation-2009.pdf


Re: [fonc] Question about OMeta

2011-04-10 Thread John Zabroski
John,

Disagree it is a simple thing, but it is a good example.

It also demonstrates blending well, since analogies are used all the
time in this domain to circumvent impedance mismatches.

For example, versioning very large database systems' schema is
non-trivial since the default methods don't scale:
alter table BigTable add /*column*/ foo int

This will lock out all readers and writers until it completes.
Effectively it is a denial of service attack. Predicting its
completion time is difficult, since it will depend on how the table
was previously built (e.g. if anything fancy was done storing sparse
columns; if there is still storage space available in-row to store the
int required by this new column thus avoiding a complete rebuild; if
the table needs to be completely rebuilt, then so do its indices; if
the table is sharded across many independent disks, then the storage
engine can parallelize the task). The *intention* is to add a column
to a table, presumably for some new requirement. But there is a latent
requirement on the intention, forming a new meaning, that nobody
should observe a delay during the schema upgrade.
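
The usual workaround, a general pattern rather than anything specific from this
thread, is to split the change into a cheap NULLable column addition plus a
batched backfill, so readers and writers are only ever blocked briefly. A
sketch, using sqlite3 purely for self-containedness (a production system would
also throttle between batches):

```python
# Batched online schema change: add the column as NULLable (a cheap
# metadata change in many engines), then backfill in small batches
# instead of one giant, lock-holding UPDATE.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE BigTable (id INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO BigTable (name) VALUES (?)",
               [("row%d" % i,) for i in range(1000)])

# Step 1: cheap schema change -- the new column is NULLable, so no rewrite.
db.execute("ALTER TABLE BigTable ADD COLUMN foo INT")

# Step 2: backfill in small batches, releasing locks between them.
BATCH = 100
while True:
    cur = db.execute(
        "UPDATE BigTable SET foo = 0 WHERE id IN "
        "(SELECT id FROM BigTable WHERE foo IS NULL LIMIT ?)", (BATCH,))
    db.commit()
    if cur.rowcount == 0:
        break

remaining = db.execute(
    "SELECT COUNT(*) FROM BigTable WHERE foo IS NULL").fetchone()[0]
print(remaining)  # 0
```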

Now, if the default method isn't robust enough, then what is? And what
do we call it?

Well, what I did to solve this problem was type "how to add a column
to a large table" into Google [1].

As for naming it, well, the enterprise software community came up with
this concept called "Database Refactorings" [2] [3] or simply
"Migrations" [4], which are heuristic systems for approximating the
Holy Grail of having a reversible logic for schema operations
(generally difficult due to destructive changes and other problems).
Programmers procedurally embed knowledge on how to change the schema,
and then just pass messages to a server that has all of this
procedural knowledge embedded in it. It is interesting (to me, anyway)
that programmers have developed a human process for working around a
complex theoretical problem (e.g., see [5] for a discussion of the
challenges in building a lingua franca for data integration, schema
evolution, database design, and model management), without ever
knowing the problems.  Good designers realize there is a structural
problem and create some structure and encapsulate the process for
solving it.  Schema matching in general is considered AI-Complete
since it is believed to require reproducing human intelligence to do it
automatically [6], and so some approaches even take a cognitive
learning approach [7].
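
A minimal, hypothetical sketch of the migrations idea: each schema change
carries procedural up/down knowledge, a runner replays or unwinds them in
order, and reversibility fails exactly at destructive changes:

```python
# Toy migration runner; everything here is invented for illustration.
import sqlite3

MIGRATIONS = [
    ("create users",
     "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
     "DROP TABLE users"),
    ("add email column",
     "ALTER TABLE users ADD COLUMN email TEXT",
     None),  # no "down": historically SQLite could not drop a column
]

def migrate(db, target):
    """Apply migrations [0, target); assumes a fresh database."""
    for name, up, _down in MIGRATIONS[:target]:
        db.execute(up)

def rollback(db, applied):
    """Unwind the first `applied` migrations, newest first."""
    for name, _up, down in reversed(MIGRATIONS[:applied]):
        if down is None:
            raise RuntimeError("migration %r is not reversible" % name)
        db.execute(down)

db = sqlite3.connect(":memory:")
migrate(db, 1)   # just the users table
rollback(db, 1)  # cleanly undone
migrate(db, 2)   # both migrations
# rollback(db, 2) would raise: the column addition has no "down".
```

The raise at the end is the point: the reversible logic is a convention the
programmer must uphold, not something the system can guarantee.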

But can we do even better than this conceptualization?  For example,
at what point does an engineer decide a RDBMS is the wrong tool for
the job and switches to a NoSQL database like Redis?  If we can
identify that point, we can also perhaps predict if that trade-off was
indeed a good one.  Was the engineer simply following a pop culture
phenomenon or did he/she make a genuinely good choice?

Beyond that, another related example implicit in your referential
integrity example is dynamically federated, dynamically distributed
system design.  In the general case, we know from the CAP Theorem that
due to partition barriers we cannot guarantee referential integrity
while also having high availability and performance.  We also can't
implicitly trust the Java client code due to out-of-band communication
protocol attacks, e.g., imagine a SQL injection attack.  Likewise, we
might wish to re-use validation logic in multiple places, such as in
an HTML form, and it is not sufficient to depend on the HTML form's
JavaScript validation logic, since JavaScript can be disabled and the
browser can be bypassed completely using raw encoding of HTTP PUT/POST
form actions and sending that directly to the server.
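
The validation point can be made concrete with a small sketch (names invented):
the rules may be mirrored client-side for early feedback, but the server re-runs
them no matter what the client claims to have checked:

```python
# Validation written once, enforced on the server.  Any client-side copy
# (HTML5 attributes, JavaScript) can be bypassed by posting raw HTTP.
import re

RULES = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "age": re.compile(r"^\d{1,3}$"),
}

def validate(form):
    """Return the list of fields that fail; this call is the one that counts."""
    return [f for f, rule in RULES.items()
            if not rule.match(form.get(f, ""))]

# Simulate a request that bypassed the browser entirely:
raw_post = {"email": "not-an-email", "age": "41"}
print(validate(raw_post))  # ['email']
```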

Food for thought.

[1] http://www.google.com/search?q=how+to+add+a+column+to+a+large+table
[2] http://martinfowler.com/articles/evodb.html
[3] http://databaserefactoring.com/
[4] http://guides.rubyonrails.org/migrations.html
[5] http://www.mecs-press.org/ijigsp/ijigsp-200901007.pdf
[6] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.134.6252
[7] 
http://z-bo.tumblr.com/post/454811730/learning-to-map-between-structured-representations-of

On 4/10/11, John Nilsson j...@milsson.nu wrote:
 Hello John,

 Thanks for the pointers, I will indeed have a look at this.

 I have a pet project of mine trying to create a platform and
 programming model to handle this kind of problem. Such a simple thing
 as keeping referential integrity between static Java code, the
 embedded SQL, and the dynamic database is one of those
 irritating problems I intend to address with this approach.

 I envision a system with a meta-language and some standard
 transformations to editor views, compilation stages, and type systems,
 implemented in terms of this meta-language.

 BR,
 John

 On Sun, Apr 10, 2011 at 4:38 AM, John Zabroski johnzabro...@gmail.com
 wrote:
 John,

 It is true you can't know exact intention but that hasn't stopped
 computer scientists from trying to answer the question. 

Re: [fonc] Question about OMeta

2011-04-10 Thread John Nilsson
Wow, thanks. This will keep me occupied for a while ;-)

Regarding AI completeness and the quest for automation: in my mind it's
better to start with making it simpler for humans to do, and just keep
making it simpler until you can remove the human element. This way you
can put something out there much quicker and get feedback on what
works, and what doesn't. Most importantly, it puts something out there
for other developers (people smarter than me) to extend and improve.

Regarding the database, I have some ideas of experimenting with a
persistent approach to data storage, employing a scheme with
branching and human-assisted merging to handle evolution of the
system. For a fast-moving target such as a typical OLTP system there
must be automated merging, of course, but I see no reason for the
algorithms to be either general or completely automatic. I think we
can safely assume that there will be some competent people around to
assist with the merging. After all, there must be humans involved to
trigger the evolution to begin with.

To solve the impedance mismatch between the dynamic world of the
database and the static world of application development, I'm thinking
the best approach is to simply remove it. Why have a static,
unconnected version of the application at all? After all, code is
data, and data belongs in the database.

To have any hope of getting any kind of prototype out there I have,
for now, decided to avoid thinking about distributed systems and/or
untrusted system components. I guess this will be a 3.0 concern ;-)

BR,
John

On Sun, Apr 10, 2011 at 4:40 PM, John Zabroski johnzabro...@gmail.com wrote:
 John,

 Disagree it is a simple thing, but it is a good example.

 It also demonstrates blending well, since analogies are used all the
 time in this domain to circumvent impedance mismatches.

 For example, versioning very large database systems' schema is
 non-trivial since the default methods don't scale:
 alter table BigTable add /*column*/ foo int

 This will lock out all readers and writers until it completes.
 Effectively it is a denial of service attack. Predicting its
 completion time is difficult, since it will depend on how the table
 was previously built (e.g. if anything fancy was done storing sparse
 columns; if there is still storage space available in-row to store the
 int required by this new column thus avoiding a complete rebuild; if
 the table needs to be completely rebuilt, then so do its indices; if
 the table is sharded across many independent disks, then the storage
 engine can parallelize the task). The *intention* is to add a column
 to a table, presumably for some new requirement. But there is a latent
 requirement on the intention, forming a new meaning, that nobody
 should observe a delay during the schema upgrade.
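 The "expand, then backfill" pattern is one common workaround for that latent
 requirement. The sketch below uses hypothetical table and column names, and
 SQLite purely for illustration (the locking benefit matters on large
 client/server engines, not SQLite itself): add the column as nullable, which
 is a cheap metadata change in many engines, then fill it in small committed
 batches so no single statement holds locks across the whole table.

```python
# Sketch of the "expand, backfill in batches" pattern (illustrative names).
# Instead of one blocking ALTER + UPDATE, add the column as nullable, then
# backfill small batches so each transaction holds locks only briefly.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE BigTable (id INTEGER PRIMARY KEY, data TEXT)")
conn.executemany("INSERT INTO BigTable (data) VALUES (?)",
                 [(f"row{i}",) for i in range(1000)])

# Step 1: expand -- a nullable column, no table rewrite required.
conn.execute("ALTER TABLE BigTable ADD COLUMN foo INT")

# Step 2: backfill in small batches, committing between batches so
# readers and writers can interleave instead of waiting on one giant lock.
BATCH = 100
while True:
    cur = conn.execute(
        "UPDATE BigTable SET foo = 0 WHERE id IN "
        "(SELECT id FROM BigTable WHERE foo IS NULL LIMIT ?)", (BATCH,))
    conn.commit()
    if cur.rowcount == 0:
        break

remaining = conn.execute(
    "SELECT COUNT(*) FROM BigTable WHERE foo IS NULL").fetchone()[0]
print(remaining)  # 0 once the backfill is complete
```

 Real online-schema-change tools add refinements (throttling, triggers or
 shadow tables for concurrent writes), but the batching idea is the core.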

 Now, if the default method isn't robust enough, then What is? and What
 do we call it?

 Well, what I did to solve this problem was type in how to add a
 column to a large table into Google [1].

 As for naming it, well, the enterprise software community came up with
 this concept called Database Refactorings [2] [3] or simply
 Migrations [4], which are a heuristic system for approximating the
 Holy Grail of having a reversible logic for schema operations
 (generally difficult due to destructive changes and other problems).
  Programmers procedurally embed knowledge on how to change the schema,
 and then just pass messages to a server that has all of this
 procedural knowledge embedded in it. It is interesting (to me, anyway)
 that programmers have developed a human process for working around a
 complex theoretical problem (e.g., see [5] for a discussion of the
 challenges in building a lingua franca for data integration, schema
 evolution, database design, and model management), without ever
 knowing the problems.  Good designers realize there is a structural
 problem and create some structure and encapsulate the process for
 solving it.  Schema matching in general is considered AI-Complete
 since it is believed to require reproducing human intelligence to do it
 automatically [6], and so some approaches even take a cognitive
 learning approach [7].
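 As a rough illustration of the Migrations idea described above, here is a
 minimal up/down runner. The names `MIGRATIONS`, `schema_version`, and
 `migrate` are invented for this sketch and don't correspond to Rails,
 Alembic, or any particular tool; each schema change is a procedural up/down
 pair, and a version table records what has been applied. The "down" steps
 are exactly the heuristic approximation of reversibility: destructive
 changes (dropped data) cannot truly be undone this way.

```python
# Minimal sketch of the "migrations" idea: procedural up/down pairs plus a
# version table. Illustrative only -- not any real library's API.
import sqlite3

MIGRATIONS = [
    # (version, up-SQL, down-SQL)
    (1, "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
        "DROP TABLE users"),
    (2, "CREATE TABLE emails (user_id INTEGER, addr TEXT)",
        "DROP TABLE emails"),
]

def migrate(conn, target):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (v INTEGER)")
    current = conn.execute("SELECT MAX(v) FROM schema_version").fetchone()[0] or 0
    if target > current:      # apply pending 'up' steps in order
        for v, up, _ in MIGRATIONS:
            if current < v <= target:
                conn.execute(up)
                conn.execute("INSERT INTO schema_version VALUES (?)", (v,))
    else:                     # roll back 'down' steps in reverse order
        for v, _, down in reversed(MIGRATIONS):
            if target < v <= current:
                conn.execute(down)
                conn.execute("DELETE FROM schema_version WHERE v = ?", (v,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn, 2)   # upgrade to the latest version
migrate(conn, 1)   # "reversible": roll the emails table back off
```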

 But can we do even better than this conceptualization?  For example,
 at what point does an engineer decide a RDBMS is the wrong tool for
 the job and switches to a NoSQL database like Redis?  If we can
 identify that point, we can also perhaps predict if that trade-off was
 indeed a good one.  Was the engineer simply following a pop culture
 phenomenon or did he/she make a genuinely good choice?

 Beyond that, another related example implicit in your referential
 integrity example is dynamically federated, dynamically distributed
 system design.  In the general case, we know from the CAP Theorem that
 due to partition barriers we cannot guarantee referential integrity
 while also having high availability and performance.  We also can't
 implicitly trust the Java client code due to out-of-band communication
 protocol attacks, 

Re: [fonc] Question about OMeta

2011-04-10 Thread Julian Leviston
You should probably have a look at ActiveRecord::Migration which is part of 
Rails if you're interested in SQL-based systems, and in fact ActiveRecord in 
general is a really wonderful abstraction system - and a very good mix of do 
what you *can* in a programming-language based DSL, and what you can't in 
direct SQL.

http://api.rubyonrails.org/classes/ActiveRecord/Migration.html

Julian.

On 11/04/2011, at 2:58 AM, John Nilsson wrote:

 Wow, thanks. This will keep me occupied for a while ;-)
 
 Regarding AI-completeness and the quest for automation: in my mind it's
 better to start by making it simpler for humans to do, and then keep
 making it simpler until you can remove the human element. This way you
 can put something out there much more quickly and get feedback on what
 works, and what doesn't. Most importantly, it puts something out there for
 other developers (people smarter than me) to extend and improve.
 
 Regarding the database, I have some ideas about experimenting with a
 persistent approach to data storage, employing a scheme with
 branching and human-assisted merging to handle evolution of the
 system. For a fast-moving target such as a typical OLTP system there
 must be automated merging, of course, but I see no reason for the
 algorithms to be either general or completely automatic. I think we
 can safely assume that there will be some competent people around to
 assist with the merging. After all, there must be humans involved to
 trigger the evolution to begin with.
 
 To solve the impedance mismatch between the dynamic world of the
 database and the static world of application development, I'm thinking
 the best approach is to simply remove it. Why have a static,
 unconnected version of the application at all? After all, code is
 data, and data belongs in the database.
 
 To have any hope of getting any kind of prototype out there I have,
 for now, decided to avoid thinking about distributed systems and/or
 untrusted system components. I guess this will be a 3.0 concern ;-)
 
 BR,
 John
 
 On Sun, Apr 10, 2011 at 4:40 PM, John Zabroski johnzabro...@gmail.com wrote:
 John,
 
 Disagree it is a simple thing, but it is a good example.
 
 It also demonstrates blending well, since analogies are used all the
 time in this domain to circumvent impedance mismatches.
 
 For example, versioning very large database systems' schema is
 non-trivial since the default methods don't scale:
 alter table BigTable add /*column*/ foo int
 
 This will lock out all readers and writers until it completes.
 Effectively it is a denial of service attack. Predicting its
 completion time is difficult, since it will depend on how the table
 was previously built (e.g. if anything fancy was done storing sparse
 columns; if there is still storage space available in-row to store the
 int required by this new column thus avoiding a complete rebuild; if
 the table needs to be completely rebuilt, then so do its indices; if
 the table is sharded across many independent disks, then the storage
 engine can parallelize the task). The *intention* is to add a column
 to a table, presumably for some new requirement. But there is a latent
 requirement on the intention, forming a new meaning, that nobody
 should observe a delay during the schema upgrade.
 
 Now, if the default method isn't robust enough, then What is? and What
 do we call it?
 
 Well, what I did to solve this problem was type in how to add a
 column to a large table into Google [1].
 
 As for naming it, well, the enterprise software community came up with
 this concept called Database Refactorings [2] [3] or simply
 Migrations [4], which are a heuristic system for approximating the
 Holy Grail of having a reversible logic for schema operations
 (generally difficult due to destructive changes and other problems).
  Programmers procedurally embed knowledge on how to change the schema,
 and then just pass messages to a server that has all of this
 procedural knowledge embedded in it. It is interesting (to me, anyway)
 that programmers have developed a human process for working around a
 complex theoretical problem (e.g., see [5] for a discussion of the
 challenges in building a lingua franca for data integration, schema
 evolution, database design, and model management), without ever
 knowing the problems.  Good designers realize there is a structural
 problem and create some structure and encapsulate the process for
 solving it.  Schema matching in general is considered AI-Complete
 since it is believed to require reproducing human intelligence to do it
 automatically [6], and so some approaches even take a cognitive
 learning approach [7].
 
 But can we do even better than this conceptualization?  For example,
 at what point does an engineer decide a RDBMS is the wrong tool for
 the job and switches to a NoSQL database like Redis?  If we can
 identify that point, we can also perhaps predict if that trade-off was
 indeed a good one.  Was the engineer simply following a pop culture
 phenomenon 

Re: [fonc] Question about OMeta

2011-04-08 Thread Casey Ransberger
So if I wanted to translate a Java application to C# (which ought to be pretty 
trivial, given the similarity,) what would I do about the libraries? Or the 
native interfaces?

It seems like a lot of the semantics of modern (read: industrial 60s/70s tech) 
programs really live in libraries written in lower modes of abstraction, over 
FFI interfaces, etc. I doubt it's as easy to translate this stuff as it is core 
application code (plumbing, usually, which usually delivers the various 
electronic effluvia between the libraries in use.)

I wonder what the folks here might suggest? Is there a fifth corner in the room 
that I'm not turning?

I'd really like to be able to look at a program in the language of my 
choosing... I just don't know how useful that is when I find out that #foo 
happens to use an FFI over to something written in assembler and running on an 
emulator. It sounds ridiculous, but I never run out of ridiculous in my way 
doing this stuff for a living:)

I think about rebuilding the world in a way that keeps algorithms in a repo, 
a la Mathematica. Pure algorithms/functions, math really, seem to be easier in 
some cases to compose than classes/inheritance/etc (am I wrong? I could be 
wrong here.)

I don't see a way to do anything like this without first burning the disk 
packs, which is a bummer, because if there was a really workable way to 
translate large applications, I know some folks with COBOL apps who might have 
interesting work for me (I'm a sucker for old systems. It's like digging up an 
odd ceramic pot in the back yard and wondering who left it there, when, why. 
Technological archeology and such. I'm also a sucker for shiny new technology 
like OMeta, so I picture gobs of fun.) 

Fortunately I have some of the best people in the world hard at work on burning 
my disk packs! Thanks VPRI:) Can't wait to dig into Frank and see what's there. 
Huge fan of HyperCard, so I'm really pleased to see the direction it's taking. 

On Apr 8, 2011, at 2:46 PM, Alan Kay alan.n...@yahoo.com wrote:

 It does that all the time. An easy way to do it is to make up a universal 
 semantics, perhaps in AST form, then write translators into and out of.
 
 Cheers,
 
 Alan
 
 From: Julian Leviston jul...@leviston.net
 To: Fundamentals of New Computing fonc@vpri.org
 Sent: Fri, April 8, 2011 7:24:28 AM
 Subject: [fonc] Question about OMeta
 
 I have a question about OMeta.
 
 Could it be used in any way to efficiently translate programs between 
 languages? I've been thinking about this for a number of months now... and it 
 strikes me that it should be possible...?
 
 Julian.
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Question about OMeta

2011-04-08 Thread Julian Leviston
Surely if the translation is efficient, then you can simply translate 
everything (libraries, too) down to a sub-machine machine code... which 
wouldn't take too much space - in fact it'd probably take less space than 
existing compiled libraries AND their documentation.

... maybe we could call this layer Nothing... ? ;-)

Julian.

On 09/04/2011, at 10:10 AM, Casey Ransberger wrote:

 So if I wanted to translate a Java application to C# (which ought to be 
 pretty trivial, given the similarity,) what would I do about the libraries? 
 Or the native interfaces?
 
 It seems like a lot of the semantics of modern (read: industrial 60s/70s 
 tech) programs really live in libraries written in lower modes of 
 abstraction, over FFI interfaces, etc. I doubt it's as easy to translate this 
 stuff as it is core application code (plumbing, usually, which usually 
 delivers the various electronic effluvia between the libraries in use.)
 
 I wonder what the folks here might suggest? Is there a fifth corner in the 
 room that I'm not turning?
 
 I'd really like to be able to look at a program in the language of my 
 choosing... I just don't know how useful that is when I find out that #foo 
 happens to use an FFI over to something written in assembler and running on 
 an emulator. It sounds ridiculous, but I never run out of ridiculous in my 
 way doing this stuff for a living:)
 
 I think about rebuilding the world in a way that keeps algorithms in a 
 repo, a la Mathematica. Pure algorithms/functions, math really, seem to be 
 easier in some cases to compose than classes/inheritance/etc (am I wrong? I 
 could be wrong here.)
 
 I don't see a way to do anything like this without first burning the disk 
 packs, which is a bummer, because if there was a really workable way to 
 translate large applications, I know some folks with COBOL apps who might 
 have interesting work for me (I'm a sucker for old systems. It's like digging 
 up an odd ceramic pot in the back yard and wondering who left it there, when, 
 why. Technological archeology and such. I'm also a sucker for shiny new 
 technology like OMeta, so I picture gobs of fun.) 
 
 Fortunately I have some of the best people in the world hard at work on 
 burning my disk packs! Thanks VPRI:) Can't wait to dig into Frank and see 
 what's there. Huge fan of HyperCard, so I'm really pleased to see the 
 direction it's taking. 
 
 On Apr 8, 2011, at 2:46 PM, Alan Kay alan.n...@yahoo.com wrote:
 
 It does that all the time. An easy way to do it is to make up a universal 
 semantics, perhaps in AST form, then write translators into and out of.
 
 Cheers,
 
 Alan
 
 From: Julian Leviston jul...@leviston.net
 To: Fundamentals of New Computing fonc@vpri.org
 Sent: Fri, April 8, 2011 7:24:28 AM
 Subject: [fonc] Question about OMeta
 
 I have a question about OMeta.
 
 Could it be used in any way to efficiently translate programs between 
 languages? I've been thinking about this for a number of months now... and 
 it strikes me that it should be possible...?
 
 Julian.


Re: [fonc] Question about OMeta

2011-04-08 Thread Julian Leviston
Thanks for responding to my stupid question. :-)

OMeta is quite simple, which makes it very very difficult for me to think about 
sometimes (often!) :)

That's pretty fricking awesome... because it obviously means you just have to 
do two translations to get all the existing translations to and from other 
languages for free... including compilers and interpreters.

I guess this is why Frank is so potentially amazingly awesome, right? It's 
built on this idea :)

I also really like the idea that you're not just throwing away all the 
existing stuff in the process of readdressing these extremely base level 
concerns... because it's obvious that that doesn't work.

You guys rock :) I just wanna take this opportunity to give thanks that there 
are still people like you who are continuing this sort of thing for the good 
of us all.

Julian.

On 09/04/2011, at 7:46 AM, Alan Kay wrote:

 It does that all the time. An easy way to do it is to make up a universal 
 semantics, perhaps in AST form, then write translators into and out of.
 
 Cheers,
 
 Alan
 
 From: Julian Leviston jul...@leviston.net
 To: Fundamentals of New Computing fonc@vpri.org
 Sent: Fri, April 8, 2011 7:24:28 AM
 Subject: [fonc] Question about OMeta
 
 I have a question about OMeta.
 
 Could it be used in any way to efficiently translate programs between 
 languages? I've been thinking about this for a number of months now... and it 
 strikes me that it should be possible...?
 
 Julian.


Re: [fonc] Question about OMeta

2011-04-08 Thread Alan Kay
But now you are adding some side conditions :)

For example, if you want comparable or even better abstractions in the target
language, then there is a lot more work that has to be done (and I don't know of
a great system that has ever been able to do this well, e.g. to go from an
understandable low-level meaning to really nice code using the best abstractions
of the target language). Maybe John Z knows?

Most schemes retain meaning but lose expressibility through translation, and it
is hard to get this back.

The FFI library problem is actually one of the problems discussed in the STEPS
proposal and project. Some possible solutions have been suggested (I think
several have real merit -- semantic typing, matching of needs and resources,
etc.), but nothing substantive has been done. The larger problems will require
something like negotiation between modules (this idea goes back to some of the
agent ideas at PARC, and was partially catalyzed by the AM and Eurisko work by
Doug Lenat).

I think it's time to revisit some of these ideas.

Cheers,

Alan





From: Casey Ransberger casey.obrie...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org
Sent: Fri, April 8, 2011 5:10:29 PM
Subject: Re: [fonc] Question about OMeta


So if I wanted to translate a Java application to C# (which ought to be pretty 
trivial, given the similarity,) what would I do about the libraries? Or the 
native interfaces?

It seems like a lot of the semantics of modern (read: industrial 60s/70s tech) 
programs really live in libraries written in lower modes of abstraction, over 
FFI interfaces, etc. I doubt it's as easy to translate this stuff as it is core 
application code (plumbing, usually, which usually delivers the various 
electronic effluvia between the libraries in use.)

I wonder what the folks here might suggest? Is there a fifth corner in the room 
that I'm not turning?

I'd really like to be able to look at a program in the language of my 
choosing... I just don't know how useful that is when I find out that #foo 
happens to use an FFI over to something written in assembler and running on an 
emulator. It sounds ridiculous, but I never run out of ridiculous in my way 
doing this stuff for a living:)

I think about rebuilding the world in a way that keeps algorithms in a repo, a
la Mathematica. Pure algorithms/functions, math really, seem to be easier in
some cases to compose than classes/inheritance/etc (am I wrong? I could be
wrong here.)

I don't see a way to do anything like this without first burning the disk
packs, which is a bummer, because if there was a really workable way to
translate large applications, I know some folks with COBOL apps who might have
interesting work for me (I'm a sucker for old systems. It's like digging up an
odd ceramic pot in the back yard and wondering who left it there, when, why.
Technological archeology and such. I'm also a sucker for shiny new technology
like OMeta, so I picture gobs of fun.) 


Fortunately I have some of the best people in the world hard at work on burning 
my disk packs! Thanks VPRI:) Can't wait to dig into Frank and see what's there. 
Huge fan of HyperCard, so I'm really pleased to see the direction it's taking. 

On Apr 8, 2011, at 2:46 PM, Alan Kay alan.n...@yahoo.com wrote:


It does that all the time. An easy way to do it is to make up a universal 
semantics, perhaps in AST form, then write translators into and out of.

Cheers,

Alan





From: Julian Leviston jul...@leviston.net
To: Fundamentals of New Computing fonc@vpri.org
Sent: Fri, April 8, 2011 7:24:28 AM
Subject: [fonc] Question about OMeta

I have a question about OMeta.

Could it be used in any way to efficiently translate programs between
languages? I've been thinking about this for a number of months now... and it
strikes me that it should be possible...?

Julian.


Re: [fonc] Question about OMeta

2011-04-08 Thread Alan Kay
This isn't our idea, but was a favorite topic in the 60s, and was championed by
Ted Steel, who proposed an UNCOL (UNiversal Computer Oriented Language) that
could be the intermediary in all translations, especially where the end target
was machine code.

As is often the case, something accidental happened that wasn't as good --
namely C. And as is often the case, people only interested in short-term goals
started using C, and the larger idea of UNCOL never happened.

In some of the other correspondence, the loss of expressibility through
translation is mentioned. UNCOL also had this problem. I think quite a bit of
work by an expert system has to be added to something like OMeta in order to
both retain expressibility, recover it, and generate it (when the target is
more expressive than the source).
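A toy version of the UNCOL / pivot-language idea can make the pairwise-vs-pivot
point concrete. The two mini-languages below are invented for illustration: one
front end parses a Lisp-ish prefix form into a shared AST, and two independent
back ends emit infix and stack-machine (RPN) forms from that same AST. Adding a
language then costs one new front end plus one new back end, rather than a
translator for every language pair.

```python
# Sketch of a pivot-language translation: every source parses into one shared
# AST, and every target is emitted from that AST. The mini-languages here are
# made up for illustration.

# Shared AST: ("num", 3) or ("add", lhs, rhs)
def parse_lisp(tokens):
    """Front end for a Lisp-ish prefix form: ( + 1 ( + 2 3 ) )."""
    tok = tokens.pop(0)
    if tok == "(":
        tokens.pop(0)                 # consume the '+'
        lhs = parse_lisp(tokens)
        rhs = parse_lisp(tokens)
        tokens.pop(0)                 # consume the ')'
        return ("add", lhs, rhs)
    return ("num", int(tok))

def emit_infix(node):
    """Back end producing C-style infix."""
    if node[0] == "num":
        return str(node[1])
    return "(" + emit_infix(node[1]) + " + " + emit_infix(node[2]) + ")"

def emit_rpn(node):
    """Back end producing a stack-machine (RPN) form."""
    if node[0] == "num":
        return str(node[1])
    return emit_rpn(node[1]) + " " + emit_rpn(node[2]) + " +"

ast = parse_lisp("( + 1 ( + 2 3 ) )".split())
print(emit_infix(ast))   # (1 + (2 + 3))
print(emit_rpn(ast))     # 1 2 3 + +
```

Note how the sketch also exhibits the expressibility loss discussed above: the
shared AST keeps only what it was designed to represent, so anything idiomatic
in a richer source language has to be reconstructed, not merely carried through.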

(And Frank isn't awesome yet, but we have achieved a small measure of scary ...)

Cheers,

Alan





From: Julian Leviston jul...@leviston.net
To: Fundamentals of New Computing fonc@vpri.org
Sent: Fri, April 8, 2011 8:56:48 PM
Subject: Re: [fonc] Question about OMeta

Thanks for responding to my stupid question. :-)

OMeta is quite simple, which makes it very very difficult for me to think about 
sometimes (often!) :)

That's pretty fricking awesome... because it obviously means you just have to
do two translations to get all the existing translations to and from other
languages for free... including compilers and interpreters.

I guess this is why Frank is so potentially amazingly awesome, right? It's
built on this idea :)

I also really like the idea that you're not just throwing away all the 
existing stuff in the process of readdressing these extremely base level 
concerns... because it's obvious that that doesn't work.

You guys rock :) I just wanna take this opportunity to give thanks that there
are still people like you who are continuing this sort of thing for the good
of us all.

Julian.


On 09/04/2011, at 7:46 AM, Alan Kay wrote:

It does that all the time. An easy way to do it is to make up a universal 
semantics, perhaps in AST form, then write translators into and out of.

Cheers,

Alan





From: Julian Leviston jul...@leviston.net
To: Fundamentals of New Computing fonc@vpri.org
Sent: Fri, April 8, 2011 7:24:28 AM
Subject: [fonc] Question about OMeta

I have a question about OMeta.

Could it be used in any way to efficiently translate programs between
languages? I've been thinking about this for a number of months now... and it
strikes me that it should be possible...?

Julian.