Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-19 Thread Paul D. Fernhout

On 10/15/10 11:52 AM, John Zabroski wrote:

If you want great Design Principles for the Web, read (a) M.A. Padlipsky's
book The Elements of Networking Style [2], (b) Radia Perlman's book
Interconnections [3], and (c) Roy Fielding's Ph.D. thesis [4].


While not exactly about the web, I just saw this video yesterday (video link 
is the View Webinar button to the right):

  Less is More: Redefining the “I” of the IDE (W-JAX Keynote)
  http://live.eclipse.org/node/676
Not long ago the notion of a tool that hides more of the program than it 
shows sounded crazy. To some it probably still does. But as Mylyn continues 
its rapid adoption, hundreds of thousands of developers are already part of 
the next big step in the evolution of the IDE. Tasks are more important than 
files, focus is more important than features, and an explicit context is the 
biggest productivity boost since code completion. This talk discusses how 
Java, OSGi, Eclipse, Mylyn, and a combination of open source frameworks and 
commercial extensions like Tasktop have enabled this transformation. It then 
reviews lessons learned for the next generation of tool innovations, and 
looks ahead at how we are redefining the “I” of the IDE.


But, the funny thing about that video is that it is, essentially, about how
the Eclipse and Java communities have reinvented the Smalltalk-80 work
environment without admitting it or even recognizing it. :-)


Even tasks were represented in Smalltalk in the 1980s and 1990s, as
projects (able to enter whole worlds of windows) and, to a lesser extent, as
workspaces (manual collections of related evaluable commands).


I have to admit things now are bigger and better in various ways 
(including security sandboxing, the spread of these ideas to cheap hardware, 
and what Mylyn does at the desktop level with the TaskTop extensions), so I 
don't want to take that away from recent innovations or the presenter's 
ongoing work. But it is all so surreal to someone who has been using 
computers for about 30 years and knows about Smalltalk. :-)


By the way, on what Tim Berners-Lee may miss about network design, see:
  Meshworks, Hierarchies, and Interfaces by Manuel De Landa
  http://www.t0.or.at/delanda/meshwork.htm
"Indeed, one must resist the temptation to make hierarchies into villains
and meshworks into heroes, not only because, as I said, they are constantly
turning into one another, but because in real life we find only mixtures and
hybrids, and the properties of these cannot be established through theory
alone but demand concrete experimentation."


So, that interweaves with an idea like the principle of least power. Still, I
think Tim Berners-Lee makes a lot of good points. But how design patterns
combine in practice is a complex issue. :-) Manuel De Landa's point there is
so insightful, though, because it says that sometimes, yes, there is some
value to a hierarchy or a standardization, but that value interacts with the
value of meshworks in potentially unexpected ways that require
experimentation to work through.


--Paul Fernhout
http://www.pdfernhout.net/

The biggest challenge of the 21st century is the irony of technologies of 
abundance in the hands of those thinking in terms of scarcity.




Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-15 Thread John Zabroski
On Sun, Oct 10, 2010 at 9:01 AM, Leo Richard Comerford 
leocomerf...@gmail.com wrote:


 On 10 October 2010 01:44, John Zabroski johnzabro...@gmail.com wrote:

  To be fair, Tim had the right idea with a Uri.

 He also had a right idea with the Principle of Least Power. Thesis,
 antithesis...



You're referring to [1], Tim Berners-Lee's Axioms of Web architecture.

This is such a shallow document: it mixes basic principles up in horrible
ways, and he doesn't formalize things mathematically that need to be
formalized mathematically to truly understand them.  He was NOT right.  If
anything, this document sounds like he is parroting somebody else without
understanding it.

For example, his section on Tolerance is parroting the Robustness Principle,
which is generally considered to be Wrong.  There is also a better way to
achieve Tolerance in an object-oriented system: Request-based Mediated
Execution, or Feature-Oriented Programming, where clients and servers
negotiate the feature set dynamically.  His example of the HTML data format
is ridiculous and stupid, and doesn't make sense to a mathematician:
allowing implicit conversions between documents written for HTML4 Strict and
HTML4 Transitional is just one of the many stupid ideas the W3C has had.
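
To make the alternative concrete, here is a minimal Python sketch of dynamic
feature negotiation (all names invented for illustration, not any real
protocol): instead of silently tolerating input it only half-understands, the
server agrees on an explicit feature set with the client, or fails loudly:

  class NegotiationError(Exception):
      pass

  SERVER_FEATURES = {"html4-strict", "css2", "gzip"}

  def negotiate(client_features):
      # Agree on the intersection of features, or fail loudly instead of
      # silently accepting input the server only half-understands.
      agreed = SERVER_FEATURES & set(client_features)
      if not agreed:
          raise NegotiationError("no common feature set; refusing to guess")
      return agreed

  print(negotiate(["html4-strict", "css3", "gzip"]))
  # -> {'html4-strict', 'gzip'} (set order may vary)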

What about his strange assertion that SQL is Turing Complete?  What is Tim
even saying?  He also says Java is "unashamedly procedural" despite the
fact that, as I quoted earlier in this thread, he admitted at JavaOne to not
knowing much about Java.  Did he suddenly learn it in the year between
giving that talk and writing this document?  I suppose that's possible.

He also writes: "Computer Science in the 1960s to 80s spent a lot of effort
making languages which were as powerful as possible. Nowadays we have to
appreciate the reasons for picking not the most powerful solution but the
least powerful. The reason for this is that the less powerful the language,
the more you can do with the data stored in that language."  I don't think
Tim actually understands the issues at play here, judging by his complete
non-understanding of what Turing-power is (cf. SQL having Turing-power;
ANSI SQL didn't even get transitive closures until SQL-99).  There is more
to data abstraction than data interchange formats.  Even SQL expert C.J.
Date has said so: stuff like XML simply solves a many-to-many versioning
problem among data interchange formats by standardizing on a single format,
thus allowing higher-level issues like semantics to override syntactic
issues.  This is basic compiler design: create a uniform IR (Intermediate
Representation) and map things into that IR.  Tim's verbose explanations
with technical inaccuracies only confuse the issue.
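
As an aside, transitive closure is a handy litmus test for the least-power
tradeoff mentioned above. A small Python sketch of it as a fixpoint
computation -- the iterate-until-stable kind of query that a deliberately
less powerful (non-recursive) query language cannot express:

  def transitive_closure(edges):
      # Repeatedly join the relation with itself until nothing new appears.
      closure = set(edges)
      while True:
          derived = {(a, d) for (a, b) in closure
                            for (c, d) in closure if b == c}
          if derived <= closure:
              return closure
          closure |= derived

  print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
  # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]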

Besides, as far as the Principle of Least Power goes, the best example he
could give is the one that Roy Fielding provides about the design of HTTP:
Hypermedia As The Engine of Application State.

If you want great Design Principles for the Web, read (a) M.A. Padlipsky's
book The Elements of Networking Style [2], (b) Radia Perlman's book
Interconnections [3], and (c) Roy Fielding's Ph.D. thesis [4].

Mike Padlipsky actually helped build the ARPANET (and supposedly Dijkstra
gave Mike permission to use GOTO if he deemed it necessary in a networking
protocol implementation), Radia Perlman is considered to have written one
of the best dissertations on distributed systems, and Roy Fielding defined
the architectural style REST.

[1] http://www.w3.org/DesignIssues/Principles.html
[2]
http://www.amazon.com/Elements-Networking-Style-Animadversions-Intercomputer/dp/0595088791/
[3]
http://www.amazon.com/Interconnections-Bridges-Switches-Internetworking-Protocols/dp/0201634481/
[4] http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread Steve Dekorte

I have to wonder how things might be different if someone had made a tiny, 
free, scriptable Smalltalk for unix before Perl appeared...

BTW, there were rumors that Sun considered using Smalltalk in browsers instead 
of Java but the license fees from the vendors were too high. Anyone know if 
that's true?

On 2010-10-08 Fri, at 11:28 AM, John Zabroski wrote:
 Why are we stuck with such poor architecture?




Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread John Zabroski
I saw Paul Fernhout mention this once on /.
http://developers.slashdot.org/comments.pl?sid=1578224&cid=31429692

He linked to: http://fargoagile.com/joomla/content/view/15/26/

which references:

http://lists.squeakfoundation.org/pipermail/squeak-dev/2006-December/112337.html

which states:

When I became V.P. of Development at ParcPlace-Digitalk in 1996, Bill
 Lyons (then CEO) told me the same story about Sun and VW. According
 to Bill, at some point in the early '90's when Adele was still CEO,
 Sun approached ParcPlace for a license to use VW (probably
 ObjectWorks at the time) in some set top box project they were
 working on. Sun wanted to use a commercially viable OO language with
 a proven track record. At the time ParcPlace was licensing Smalltalk
 for $100 a copy. Given the volume that Sun was quoting, PP gave Sun
 a firm quote on the order of $100/copy. Sun was willing to pay at
 most $9-10/copy for the Smalltalk licenses. Sun was not willing to go
 higher and PP was unwilling to go lower, so nothing ever happened and
 Sun went its own way with its own internally developed language
 (Oak...Java). The initial development of Oak might well have predated
 the discussions between Sun and PP, but it was PP's unwillingness to
 go lower on the price of Smalltalk that gave Oak its green light
 within Sun (according to Bill anyway). Bill went on to lament that
 had PP played its cards right, Smalltalk would have been the language
 used by Sun and the language that would have ruled the Internet.
 Obviously, you can take that with a grain of salt. I don't know if
 Bill's story to me was true (he certainly seemed to think it was),
 but it might be confirmable by Adele. If it is true, it is merely
 another sad story of what might have been and how close Smalltalk
 might have come to universal acceptance.

 -Eric Clayberg


That being said, I have no idea why people think Smalltalk-80 would have
been uniformly better than Java.  I am not saying this to be negative.  In
my view, many of the biggest mistakes with Java were requiring insane legacy
compatibility, and doing it in really bad ways.  Swing should never have
been forced to reuse AWT, for example.  And AWT should never have had a
concrete component model, thus forcing Swing to inherit it (dropping the
rabbit ears, because I see no good explanation for why it had to inherit
AWT's component model via implementation inheritance).  It's hard for me
to even gauge if the Swing developers were good programmers or not, given
that ridiculously stupid constraint.  It's not like Swing even supported
phones; it was never in J2ME.  The best I can conclude is that they were not
domain experts, but who really was at the time?

On Thu, Oct 14, 2010 at 6:14 PM, Steve Dekorte st...@dekorte.com wrote:


 I have to wonder how things might be different if someone had made a tiny,
 free, scriptable Smalltalk for unix before Perl appeared...

 BTW, there were rumors that Sun considered using Smalltalk in browsers
 instead of Java but the license fees from the vendors were too high. Anyone
 know if that's true?

 On 2010-10-08 Fri, at 11:28 AM, John Zabroski wrote:
  Why are we stuck with such poor architecture?




Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread Pascal J. Bourguignon


On 2010/10/15, at 00:14 , Steve Dekorte wrote:



I have to wonder how things might be different if someone had made a  
tiny, free, scriptable Smalltalk for unix before Perl appeared...


There has been GNU Smalltalk for a long time, AFAIR before Perl, which
was quite well adapted to the Unix environment.


It would certainly qualify as tiny since it lacked any big GUI
framework, it is obviously free in all meanings of the word, and it
is well suited to writing scripts.



My point is that it hasn't changed anything and nothing else would have.


BTW, there were rumors that Sun considered using Smalltalk in  
browsers instead of Java but the license fees from the vendors were  
too high. Anyone know if that's true?


No idea, but since they invented Java, they could have, at a much lower
cost, written their own implementation of Smalltalk.


--
__Pascal Bourguignon__
http://www.informatimago.com






Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread Duncan Mak
On Thu, Oct 14, 2010 at 6:51 PM, John Zabroski johnzabro...@gmail.comwrote:

 That being said, I have no idea why people think Smalltalk-80 would have
 been uniformly better than Java.  I am not saying this to be negative.  In
 my view, many of the biggest mistakes with Java were requiring insane legacy
 compatibility, and doing it in really bad ways.  Swing should never have
 been forced to reuse AWT, for example.  And AWT should never have had a
 concrete component model, thus forcing Swing to inherit it (dropping the
 rabbit ears, because I see no good explanation for why it had to inherit
 AWT's component model via implementation inheritance).  It's hard for me
 to even gauge if the Swing developers were good programmers or not, given
 that ridiculously stupid constraint.  It's not like Swing even supported
 phones; it was never in J2ME.  The best I can conclude is that they were not
 domain experts, but who really was at the time?


I started programming Swing a year ago and spent a little time learning its
history when I first started. I was able to gather a few anecdotes, and they
have fascinated me.

There were two working next-generation Java GUI toolkits at the time of
Swing's conception - Netscape's IFC and Lighthouse Design's LFC - both
toolkits were developed by ex-NeXT developers and borrowed heavily from
AppKit's design. IFC even had a design tool that mimicked Interface Builder
(which still lives on today in Cocoa).

Sun first acquired Lighthouse Design, then decided to join forces with
Netscape - with two proven(?) toolkits, the politics worked out such that
all the AWT people at Sun ended up leading the newly-joined team, the
working code from the other parties was discarded, and from this, Swing was
born.

http://talblog.info/archives/2007/01/sundown.html
http://www.noodlesoft.com/blog/2007/01/23/the-sun-also-sets/

-- 
Duncan.


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread Jecel Assumpcao Jr.
Pascal J. Bourguignon wrote:
 No idea, but since they invented Java, they could have at a much lower  
 cost written their own implementation of Smalltalk.

or two (Self and Strongtalk).

Of course, Self had to be killed in favor of Java since Java ran in just
a few kilobytes while Self needed a 24MB workstation, and most of Sun's
clients still had only 8MB (PC users were even worse off, at 4MB and under).

-- Jecel




Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread John Zabroski
Wow!  Thanks for that amazing nugget of Internet history.

Fun fact: Tony Duarte wrote the book Writing NeXT Programs under the
pseudonym Ann Weintz because supposedly Steve Jobs was so secretive that he
told employees not to write books about the ideas in NeXT's GUI.  See:
http://www.amazon.com/Writing-Next-Programs-Introduction-Nextstep/dp/0963190105/
where Tony comments on it.



On Thu, Oct 14, 2010 at 7:53 PM, Duncan Mak duncan...@gmail.com wrote:

 On Thu, Oct 14, 2010 at 6:51 PM, John Zabroski johnzabro...@gmail.comwrote:

 That being said, I have no idea why people think Smalltalk-80 would have
 been uniformly better than Java.  I am not saying this to be negative.  In
 my view, many of the biggest mistakes with Java were requiring insane legacy
 compatibility, and doing it in really bad ways.  Swing should never have
 been forced to reuse AWT, for example.  And AWT should never have had a
 concrete component model, thus forcing Swing to inherit it (dropping the
 rabbit ears, because I see no good explanation for why it had to inherit
 AWT's component model via implementation inheritance).  It's hard for me
 to even gauge if the Swing developers were good programmers or not, given
 that ridiculously stupid constraint.  It's not like Swing even supported
 phones; it was never in J2ME.  The best I can conclude is that they were not
 domain experts, but who really was at the time?


 I started programming Swing a year ago and spent a little time learning its
 history when I first started. I was able to gather a few anecdotes, and they
 have fascinated me.

 There were two working next-generation Java GUI toolkits at the time of
 Swing's conception - Netscape's IFC and Lighthouse Design's LFC - both
 toolkits were developed by ex-NeXT developers and borrowed heavily from
 AppKit's design. IFC even had a design tool that mimicked Interface Builder
 (which still lives on today in Cocoa).

 Sun first acquired Lighthouse Design, then decided to join forces with
 Netscape - with two proven(?) toolkits, the politics worked out such that
 all the AWT people at Sun ended up leading the newly-joined team, the
 working code from the other parties was discarded, and from this, Swing was
 born.

 http://talblog.info/archives/2007/01/sundown.html
 http://www.noodlesoft.com/blog/2007/01/23/the-sun-also-sets/

 --
 Duncan.



Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-11 Thread Ryan Mitchley

It seems that a logic-programming-inspired take on types might be useful:
e.g., ForAll X such that X DoThis is defined, X DoThis

or maybe, ForAll X such that X HasMethodReturning Y and Y DoThis is
defined, Y DoThis


Or, how about pattern matching on message reception? Allow free
variables in the method prototype so that inexact matching is possible.
Send a message to a field of objects, and all interpret the message as
it binds to their receptors...
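
To make that concrete, here is a small Python sketch of pattern matching on
message reception (one possible reading of the idea; all names are invented):
receivers declare message patterns with free variables, and a message
broadcast to a field of objects binds to whichever receptors match:

  import re

  class PatternObject:
      # Receptors are (pattern, handler) pairs; the free variables in the
      # "method prototype" are named groups in the pattern.
      def __init__(self):
          self.receptors = []

      def on(self, pattern, handler):
          self.receptors.append((re.compile(pattern), handler))

      def receive(self, message):
          # Inexact matching: every receptor the message binds to fires.
          for pattern, handler in self.receptors:
              match = pattern.fullmatch(message)
              if match:
                  handler(**match.groupdict())

  cell = PatternObject()
  cell.on(r"grow (?P<amount>\d+)", lambda amount: print("growing by", amount))

  # Send a message to a "field" of objects; each interprets it as it binds.
  for obj in [cell, PatternObject()]:
      obj.receive("grow 3")  # prints "growing by 3" once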



On 09/10/2010 04:57, Casey Ransberger wrote:

I think type is a foundationally bad idea. What matters is that the object in
question can respond intelligently to the message you're passing it. Or at least, that's
what I think right now, anyway. It seems like type specification (and, as such, early
binding) has a very limited real use in the domain of
really-actually-for-real-and-seriously mission-critical systems, like those that guide
missiles or passenger planes.

   








Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-10 Thread Paul D. Fernhout

On 10/10/10 2:25 AM, Dirk Pranke wrote:

On Sat, Oct 9, 2010 at 8:50 PM, Paul D. Fernhout
pdfernh...@kurtz-fernhout.com  wrote:

On 10/9/10 3:45 PM, Dirk Pranke wrote:


C++ is a significant security concern; and it is reasonable to want a
browser written in a memory-safe language.

Unfortunately, web browsers are large, extremely
performance-sensitive, legacy applications. All of the major browsers
are written in some combination of C, C++, and Objective-C (and
undoubtedly assembly in isolated areas like the JITs), and it's
unclear if one can reasonably hope to see a web browser written from
scratch in a new language ever render the majority of the
current web correctly; the effort may simply be too large. I was not
aware of Lobo; it looks interesting but currently idle, and it is a fine
example of this problem.

I continue to hope, but I may be unreasonable :)


Yes, that seems like a good description of the problem.

How about this as a possibility towards a solution ...


I think I'd rather try to write a browser from scratch than
debug/maintain your solution ;)


Sure, with today's tools, debugging a solution developed at a higher level
of abstraction would be hard. So, sure, this is why no one does what I
proposed: parsing C++ into an abstraction, working with the abstraction,
then regenerating C++ as an assembly language, and then trying to debug
the C++ and change the abstraction, with round-trip problems from all of
that. Still, I bet people said that about Fortran -- how can you possibly
debug a Fortran program when what you care about are the assembler
instructions?


But like I implied at the start, by the (imagined) standards of, say, 2050, 
we don't have any debuggers worth anything. :-) To steal an idea from 
Marshall Brain:

  http://sadtech.blogspot.com/2005/01/premise-of-sadtech.html
Have you ever talked with a senior citizen and heard the stories? Senior 
citizens love to tell about how they did things way back when. For 
example, I know people who, when they were kids, lived in shacks, pulled 
their drinking water out of the well with a bucket, had an outhouse in the 
back yard and plowed the fields using a mule  and a hand plow. These people 
are still alive and kicking -- it was not that long ago that lots of people 
in the United States routinely lived that way. ... When we look at this kind 
of stuff from today's perspective, it is so sad. The whole idea of spending 
200 man-hours to create a single shirt is sad. The idea of typing a program 
one line at a time onto punch cards is sad, and Lord help you if you ever 
dropped the deck. The idea of pulling drinking water up from the well by the 
bucketful or crapping in a dark outhouse on a frigid winter night is sad. 
Even the thought of using the original IBM PC in 1982, with its 4.77 MHz 
processor, single-sided floppy disk and 16 KB of RAM is sad when you look at 
it just 20 years later. Now we can buy machines that are 1,000 times faster 
and have a million times more disk space for less than $1,000. But think 
about it -- the people who used these technologies at the time thought that 
they were on the cutting edge. They looked upon themselves as cool, hip, 
high-tech people: ...


So, people fire up GDB on C++ (or whatever) and think they are cool and hip.
:-) Or, for a more realistic example given that C++ is a couple decades old,
people fire up Firebug on their JavaScript and think they are cool and hip.
:-) And, by today's standards, they are:

  http://getfirebug.com/
But by the standards of the future of new computing in 2050, Firebug, as 
awesome as it is now, lacks things that one might think would be common in 
2050, like:

  * monitoring a simulated end user's cognitive load;
  * monitoring what is going on at hundreds of network processing nodes;
  * integration with to-do lists and workflow management;
  * really useful conversational suggestions about things to think about in 
regard to potential requirements, functionality, algorithmic, 
implementation, networking, testing, social, paradigm, security, and 
stupidity bugs, as well as all the other types of possible bugs we maintain 
in an entomological catalog in a semantic web (a few specimens of which are 
listed here:
  http://weblogs.asp.net/fbouma/archive/2003/08/01/22211.aspx );
  * easy archiving of the traces of sessions and an ability to run things 
backwards or in a parallelized many-worlds environment; and
  * an ability to help you debug an application's multiple abstractions 
(even if it is cool that it can help you debug what is going on with the 
server a bit: http://getfirebug.com/wiki/index.php/Firebug_Extensions )


So, to add something to a to-do list for a universal debugger: it should
be able to transparently deal with the interfaces between different levels of
abstraction. :-) As well as all those other things.


Essentially, we need a debugger who is an entomologist. Or, to take it one 
step further, we need a debugger that 

Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-10 Thread Leo Richard Comerford
On 10 October 2010 07:31, Dirk Pranke dpra...@chromium.org wrote:

 But there's always NativeClient (as someone else linked to) if you
 need raw speed :)

 -- Dirk

Not just Native Client: a Native Client app including a GUI toolkit.
(A toolkit which will soon be talking to an OpenGL-derived interface,
or is it already?) At this point you're almost ready to rip out most
of the surface area of the runtime interface presented by the browser
to the application running in each tab. You still need things similar
to (say) a HTML renderer, but you don't need the browser vendor's
choice of monolithic HTML renderer riveted into a fixed position in
every browser tab's runtime.

(That link again:
http://labs.qt.nokia.com/2010/06/25/qt-for-google-native-client-preview/#comment-7893
)

On 10 October 2010 01:44, John Zabroski johnzabro...@gmail.com wrote:

 To be fair, Tim had the right idea with a Uri.

He also had a right idea with the Principle of Least Power. Thesis,
antithesis...

Leo.



Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-10 Thread Paul D. Fernhout

On 10/9/10 8:44 PM, John Zabroski wrote:

From experience, most people don't want to
discuss this because they're happy with Good Enough and scared of testing
something better.  They are always male, probably 40'ish, probably have a
wife and two kids.  We're on two different planets, so I understand
different priorities.


Well, that is probably pretty close by coincidence to describing me. :-)

But, as much as I agree with the general thrust of your arguments about
design issues, including that the network is the computer that needs debugging
(and better design), I think there is another aspect of this that relates to
Manuel De Landa's point on design and meshworks, hierarchies, and interfaces.


See:
  Meshwork, Hierarchy, and Interfaces
  http://www.t0.or.at/delanda/meshwork.htm
To make things worse, the solution to this is not simply to begin adding 
meshwork components to the mix. Indeed, one must resist the temptation to 
make hierarchies into villains and meshworks into heroes, not only because, 
as I said, they are constantly turning into one another, but because in real 
life we find only mixtures and hybrids, and the properties of these cannot 
be established through theory alone but demand concrete experimentation. 
Certain standardizations [of interfaces], say, of electric outlet designs or 
of data-structures traveling through the Internet, may actually turn out to 
promote heterogenization at another level, in terms of the appliances that 
may be designed around the standard outlet, or of the services that a common 
data-structure may make possible. On the other hand, the mere presence of 
increased heterogeneity is no guarantee that a better state for society has 
been achieved. After all, the territory occupied by former Yugoslavia is 
more heterogeneous now than it was ten years ago, but the lack of uniformity 
at one level simply hides an increase of homogeneity at the level of the 
warring ethnic communities. But even if we managed to promote not only 
heterogeneity, but diversity articulated into a meshwork, that still would 
not be a perfect solution. After all, meshworks grow by drift and they may 
drift to places where we do not want to go. The goal-directedness of 
hierarchies is the kind of property that we may desire to keep at least for 
certain institutions. Hence, demonizing centralization and glorifying 
decentralization as the solution to all our problems would be wrong. An open 
and experimental attitude towards the question of different hybrids and 
mixtures is what the complexity of reality itself seems to call for. To 
paraphrase Deleuze and Guattari, never believe that a meshwork will suffice 
to save us. {11}


So, that's where the thrust of your point gets parried, :-) even if I may 
agree with your technical points about what is going to make good software. 
And even, given, as suggested in my previous note, that a bug in our 
socio-economic paradigm has led to the adoption of software that is buggy 
and sub-optimal in all sorts of ways. (I feel that having more people learn 
about the implications of cheap computing is part of fixing that 
socio-economic bug related to scarcity thinking in an abundant world.)


Building on Manuel De Landa's point, JavaScript is the ubiquitous standard
local VM in our lives, one that potentially allows a diversity of things to
be built on a layer above it (like millions of educational web pages about a
variety of things). JavaScript/ECMAScript may be a language with various
warts; it may be 100X slower than it needs to be as a VM; it may not be
message-oriented; it may have all sorts of other issues, including a
standardization project that seems bent on removing all the reflection from
it that allows people to write great tools for it; and so on. But it is what
we have now as a sort of social-hierarchy-defined standard for content that
people can interact with using a web browser (or an email system that is
extended in it, like Thunderbird). And so people are building a lot of
goodness on top of JavaScript, like with Firebug and its extensions.

  http://getfirebug.com/wiki/index.php/Firebug_Extensions
One may rightfully say JavaScript has all sorts of issues (as a sort of
hastily cobbled-together hierarchy of common functionality), but all those
extensions show what people can do with it in practice (through the power of
a social meshwork).


And that syncs with a deeper implication of your point that, in Sun's
language, the network is the computer, and thus we need to adjust our
paradigms (and tools) to deal with that. But whereas you are talking more
about the technical side of that, there is another, social side of it as
well. Both the technical side and the social side (and especially their
interaction in the context of competition and commercial pressures and
haste) can lead to unfortunate compromises in various ways. Still, it is
what we have right now, and we can think about where that momentum is going
and how to best 

Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-10 Thread Julian Leviston
I'm not entirely sure why the idea of pattern expressions and meta-translators 
wasn't an awesome idea.

If expressing an idea cleanly in a language is possible, and expressing that 
language in another language clearly and cleanly is possible, why is it not 
possible to write a tool which will re-express that original idea in the second 
language, or any other target language for that matter?
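
As a toy illustration of that question, here is a deliberately tiny Python
sketch of re-expressing one abstract description in two target notations
(both targets are invented here; the real FONC/STEPS work is of course far
more ambitious):

  def translate(node, target):
      # One abstract description of an idea, re-expressed per target.
      op, left, right = node
      if target == "lisp":
          return "(%s %s %s)" % (op, left, right)
      if target == "c":
          return "(%s %s %s)" % (left, op, right)
      raise ValueError("unknown target: %s" % target)

  expr = ("+", 1, 2)
  print(translate(expr, "lisp"))  # (+ 1 2)
  print(translate(expr, "c"))     # (1 + 2)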

I thought this development of a meta-translator was not only one of the FONC 
goals, but one that had for the most part been at least completed?

Julian.

On 11/10/2010, at 1:38 AM, Paul D. Fernhout wrote:

 Anyway, so I do have hope that we may be able to develop platforms that let 
 us work at a higher level of abstraction like for programming or semantic web 
 knowledge representation, but we should still accept that (de facto social) 
 standards like JavaScript and so on have a role to play in all that, and we 
 need to craft our tools and abstractions with such things in mind (even if we 
 might in some cases just use them as a VM until we have something better). 
 That has always been the power of, say, the Lisp paradigm, even as 
 Smalltalk's message passing paradigm has a lot going for it as a unifying and 
 probably more scalable abstraction. What can be frustrating is when our 
 bosses say "write in Fortran" instead of saying "write on top of Fortran",
 same as if they said "Write in assembler" instead of "Write on top of
 assembler" (or JavaScript or whatever). I think the work VPRI is doing
 through COLA, to think about getting the best of both worlds there, is a
 wonderful aspiration, kind of like trying to understand the particle/wave
 duality mystery in physics (which, it turns out, is potentially explainable
 by a many-worlds hypothesis, btw). But it might help to have better tools
 to do that -- tools that linked somehow with the semantic web and social
 computing and so on.




Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-10 Thread Leo Richard Comerford
On 10 October 2010 14:01, Leo Richard Comerford leocomerf...@gmail.com wrote:

  You still need things similar
 to (say) a HTML renderer, but you don't need the browser vendor's
 choice of monolithic HTML renderer riveted into a fixed position in
 every browser tab's runtime.

Let me rephrase that a bit for clarity, to "Those applications still
need things similar to HTML renderers (and scripting languages), but
they don't need the browser vendor's choice of monolithic HTML
renderer riveted into a fixed position in their runtime."



Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-10 Thread Julian Leviston
My answer can be best expressed simply and deeply thus:

I don't see the unix command 'ls' being rewritten every day or even every
year.

Do you understand what I'm trying to get at? It's possible to use an 'ls'
replacement if I so choose, but that's simply my preference. 'ls' itself hasn't
been touched much in a long time. In the same way, the ADD assembler instruction
is pretty similar across platforms. Get my drift?

Part of the role of a language meta-description is the implementation of every
translatable artefact.  Thus if some source item requires some widget, that
widget comes along for the ride as part of the source language (and
framework) meta-description.

I'm possibly missing something, but I don't see the future as being a simple
extension of the past... it should not be that we simply create bigger worlds
as we've done in the past (think virtual machines), but rather that we look for
ways to adapt things from those worlds to integrate with each other. Thus, I
should not be looking for a better IDE, or programming environment, but rather
take the things I like out of what exists... some people like to type their
code, others like to drag 'n' drop it. I see no reason why we can't stop trying
to re-create entire universes inside the machines we use and simply split
things at a component level. We're surely smarter than reinventing the same
pattern again and again.

Julian.

On 11/10/2010, at 2:39 AM, Paul D. Fernhout wrote:

 Software is never done. :-) Especially because the world keeps changing 
 around it. :-) And especially when it is research and doing basic research 
 looking for new ideas. :-)




Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-09 Thread Leo Richard Comerford
I have mentioned this before, but
http://labs.qt.nokia.com/2010/06/25/qt-for-google-native-client-preview/#comment-7893
.

Leo.



Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-09 Thread Dirk Pranke
On Fri, Oct 8, 2010 at 11:09 PM, Paul D. Fernhout
pdfernh...@kurtz-fernhout.com wrote:
 Yes, there are similarities, you are right. I'm not familiar in detail
 because I have not used Chrome or looked at the code, but to my
 understanding Chrome does each tab as a separate process. And typically (not
 being an expert on Chrome) that process would run a rendering engine (or
 maybe not?), JavaScript (presumably?), and/or whatever downloaded plugins
 are relevant to that page (certainly?).


Yes, each tab is roughly a separate process (the real algorithm is more
complicated, as the Wikipedia article says). Rendering and JS are in
the same process, but plugins run in separate sandboxed processes.
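
As a toy illustration of that isolation (not Chrome's actual architecture;
the names are invented), one process per tab in Python, so a crash in one
tab cannot take down the others:

  from multiprocessing import Process

  def run_tab(url):
      # Imagine rendering and JS execution confined to this process.
      print("rendering", url, "in its own process")

  if __name__ == "__main__":
      tabs = [Process(target=run_tab, args=(url,))
              for url in ("http://a.example", "http://b.example")]
      for tab in tabs:
          tab.start()
      for tab in tabs:
          tab.join()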

C++ is a significant security concern; and it is reasonable to want a
browser written in a memory-safe language.

Unfortunately, web browsers are large, extremely
performance-sensitive, legacy applications. All of the major browsers
are written in some combination of C, C++, and Objective-C (and
undoubtedly assembly in isolated areas like the JITs), and it's
unclear if one can reasonably hope to see a web browser written from
scratch in a new language ever render the majority of the
current web correctly; the effort may simply be too large. I was not
aware of Lobo; it looks interesting but currently idle, and it is a fine
example of this problem.

I continue to hope, but I may be unreasonable :)

-- Dirk



Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-09 Thread Dethe Elza
On 2010-10-09, at 12:45 PM, Dirk Pranke wrote:
 [...] it's
 unclear if one can reasonably hope to see a web browser written from
 scratch in a new language ever render the majority of the
 current web correctly; the effort may simply be too large. I was not
 aware of Lobo; it looks interesting but currently idle, and it is a fine
 example of this problem.
 
 I continue to hope, but I may be unreasonable :)

The Mozilla Foundation is creating the Rust language explicitly to have an 
alternative to C++ for building a web browser, so it may not be that 
unreasonable in the medium term. Progress on Google's Go language as an 
alternative to C, and the addition of garbage collection to Objective-C, show 
there is a widespread need for alternatives to C/C++ among the folks who 
create browsers.

Rust:
http://github.com/graydon/rust/wiki/Project-FAQ

--Dethe

http://livingcode.org/


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-09 Thread John Zabroski
On Fri, Oct 8, 2010 at 9:17 PM, Dirk Pranke dpra...@chromium.org wrote:

 On Fri, Oct 8, 2010 at 11:28 AM, John Zabroski johnzabro...@gmail.com
 wrote:
  JavaScript also doesn't support true delegation, as in the Actors Model
 of
  computation.
 
  Also, Sencha Ext Designer is an abomination.  It is a fundamental
  misunderstanding of the Web and how to glue together chunks of text via
  hyperlinks.  It is the same story for any number of technologies that
  claim to fix the Web, including GWT... they are all not quite up to
 par,
  at least by my standards.
 
  The fundamental problem with the Web is the Browser.  This is the
 monstrous bug.
 
  The fundamental problem with Sencha Ext is that the quality of the code
  isn't that great (many JavaScript programmers compound the flaws of the
  Browser by not understanding how to effectively program against the
 Browser
  model), and it also misunderstands distributed computing.  It encourages
  writing applications as if they were still single-tier IBM computers from
  the 1970s/80s costing thousands of dollars.
 
  Why are we stuck with such poor architecture?
 

 Apologies if you have posted this before, but have you talked anywhere
 in more detail about what the monstrous bug is (specifically),


I have not discussed this much on the FONC list.  My initial opinions were
formed from watching Alan Kay's 1997 OOPSLA speech, The Computer Revolution
Hasn't Happened Yet (available on Google Video [1]).  One of Alan Kay's key
criticisms from that talk was that You don't need a Browser.  I've
mentioned this quote and its surrounding context on the FONC list in the
past.  The only comment I've made on this list in the past, related to this
topic, is that the View Source feature in the modern Browser is an
abomination and completely non-object-oriented.  View Source is a property
of the network, not the document.  The document is just exposing a
representation.  If you want to build a debugger, then it needs to be based
on the network, not tied to some Browser that has to know how to interpret
its formats.  What we have right now with View Source is not a true Source.
It's some barf the Browser gives you because you don't know enough to
demand something better, richer.  Sure, it's possible, if View Source is a
property of the network, that a subnet can always refuse your question.
That's to be expected.  When that happens, you can fall back to the kludgy
View Source you have today.

99% of the people in this world to this point have been happy with it,
because they just haven't thought about what something better should do.
All they care about is if they can steal somebody else's site design and
JavaScript image rollover effect, because editing and stealing live code is
even easier than googling and finding a site with these Goodies. And the
site they would've googled probably just used the same approach of seeing
some cool effect and using View Source.  The only difference is the sites
about DHTML design decorated the rip with an article explaining how to use
it and common pitfalls the author encountered in trying to steal it.

For some context, to understand Alan's criticisms, you have to know Alan's
research and his research groups.  For example, in the talk I linked above,
Alan makes an overt reference to Tim Berners-Lee's complete
non-understanding of how to build complex systems (Alan didn't call out Tim
directly, but he said the Web is what happens when physicists play with
computers).  Why did Alan say such harsh things?  Because he can back it
up.  Some of his work at Apple on the Vivarium project was focused on
something much better than Tim's Browser.  Tim won because people didn't
understand the difference and didn't really care for it (Worse Is Better).
To be fair, Tim had the right idea with a Uri.  Roy Fielding's Ph.D. thesis
explains this (and hopefully if you're working on Chromium you've read that
important thesis, since it is probably as widely read as Claude Shannon's on
communication).  And both Alan and Tim understood languages needed good
resource structure.  See Tim's talk at the first-ever JavaOne in 1997 [2]
and Tim's criticism of Java (where he admits never having seen the language,
or used it, just stating what he thinks the most important thing about a VM
is [3]) But that's all Tim got right at first.  Tim got distracted by his
Semantic Web vision.  Sometimes having too huge a vision prevents you from
working on important small problems.  Compare Tim's quote in [3] to Alan's
comments about Java in [1] where Alan talks about meta-systems and
portability.


 or
 how programming for the web misunderstands distributed computing?


I've written on my blog a few criticisms, such as a somewhat incoherent
critique of what some developers called SOFEA architecture. For an
introduction to SOFEA, read [4].  My critique, which again was just me
rambling about its weaknesses and not meant for widespread consumption, can
be found at [5].  I 

Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-09 Thread John Zabroski
On Sat, Oct 9, 2010 at 3:45 PM, Dirk Pranke dpra...@chromium.org wrote:

 On Fri, Oct 8, 2010 at 11:09 PM, Paul D. Fernhout
 pdfernh...@kurtz-fernhout.com wrote:
  Yes, there are similarities, you are right. I'm not familiar in detail
  because I have not used Chrome or looked at the code, but to my
  understanding Chrome does each tab as a separate process. And typically
 (not
  being an expert on Chrome) that process would run a rendering engine (or
  maybe not?), JavaScript (presumably?), and/or whatever downloaded plugins
  are relevant to that page (certainly?).
 

 Yes, each tab is roughly a separate process (real algorithm is more
 complicated, as the wikipedia article says). rendering and JS are in
 the same process, but plugins run in separate sandboxed processes.

 C++ is a significant security concern; and it is reasonable to want a
 browser written in a memory-safe language.

 Unfortunately, web browsers are large, extremely
 performance-sensitive, legacy applications. All of the major browsers
 are written in some combination of C, C++, and Objective-C (and
 undoubtedly assembly in isolated areas like the JITs), and it's
 unclear if one can reasonably hope to see a web browser written from
 scratch in a new language ever render the majority of the
 current web correctly; the effort may simply be too large. I was not
 aware of Lobo; it looks interesting but currently idle, and it is a fine
 example of this problem.

 I continue to hope, but I may be unreasonable :)



Most major browsers are doing a much better job making sure they understand
performance, how to evaluate it and how to improve it.  C, C++, Objective-C,
it doesn't matter.  The key is really domain-specific knowledge and knowing
what to tune for.  You need a huge benchmark suite to understand what users
are actually doing.

Big advances in browsers are being made thanks to research into parallelizing
browser rendering.  But there are a host of other bottlenecks, as
Microsoft's IE team pointed out.  Ultimately, when you are tuning at this
scale, everything looks like a compiler design issue, not a C, C++, or
Objective-C issue.  These all just become high-level assembly languages.  A
garbage JavaScript engine in C is no good.  A fast JavaScript engine in C,
written with extensive performance tuning and benchmarks to ensure no
performance regressions, is good.

Silverlight (a managed runtime) blows away pretty much any hand-tuned
JavaScript app, even the ones Google writes, by the way... unfortunately it
uses no hardware-accelerated rendering that I'm aware of.  Current browser
vendors have version branches with hardware-accelerated rendering that
parallelizes perfectly.  Silverlight has its own problems, of course.


[fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread Paul D. Fernhout
The PataPata project (by me) attempted to bring some ideas for Squeak and 
Self to Python about five years ago. A post mortem critique on it from four 
years ago:

  PataPata critique: the good, the bad, the ugly
  http://patapata.sourceforge.net/critique.html

I am wondering if there is some value in reviving the idea for JavaScript?

Firebug shows what is possible as a sort of computing microscope for 
JavaScript and HTML and CSS. Sencha Ext Designer shows what is possible as 
far as interactive GUI design.


Here are some comments I just posted to the list there that touch on system 
design, Squeak, Self, and some FONC issues:

  Thoughts on PataPata four years later...

http://sourceforge.net/mailarchive/message.php?msg_name=4CACF913.8030405%40kurtz-fernhout.com

http://sourceforge.net/mailarchive/message.php?msg_name=4CAF294B.8010101%40kurtz-fernhout.com

Anyway, I still think the telescope/microscope issue in designing new 
computing systems is important, as I suggested here in 2007:
[Edu-sig] Comments on Kay's Reinvention of Programming proposal (was Re: 
More Pipeline News)

http://mail.python.org/pipermail/edu-sig/2007-March/007822.html
There is once again the common criticism leveled at Smalltalk of being too 
self-contained. Compare this proposal with one that suggested making tools 
that could be used like a telescope or a microscope for relating to code 
packages in other languages -- to use them as best possible on their own 
terms (or perhaps virtualized internally). Consider how the proposal 
suggests scripting all the way down -- yet how are the scripting tools built 
in Squeak? Certainly not with the scripting language. And consider there are 
always barriers to any system -- where you hit the OS, or CPU microcode, or 
proprietary hardware specifications, or even unknowns in quantum physics, 
and so on. :-) So every system has limits. But by pretending this one will 
not, this project may miss out on the whole issue of interfacing to systems 
beyond those limits in a coherent way.


Biology made a lot of progress by inventing the microscope -- and that was 
done way before it invented genetic engineering, and even before it 
understood there were bacteria around. :-)


What are our computing microscopes now? What are our computing telescopes? 
Are debuggers crude computing microscopes? Are class hierarchy browsers and 
package managers and IDEs and web browsers crude computing telescopes?


Maybe we need to reinvent the computing microscope and computing telescope 
to help in trying to engineer better digital organisms via FONC? :-) Maybe 
it is more important to do it first?


But sure, everyone is going to say, we have all the debuggers we need, 
right? We have all the inspectors, browsers, and so forth we could use?


I know, inventing a microscope probably sounded crazy at the time too:
  http://inventors.about.com/od/mstartinventions/a/microscopes.htm
1590 – Two Dutch eye glass makers, Zaccharias Janssen and son Hans Janssen 
experimented with multiple lenses placed in a tube. The Janssens observed 
that viewed objects in front of the tube appeared greatly enlarged, creating 
both the forerunner of the compound microscope and the telescope.


After all, everyone in 1590 probably thought biology was already well
understood. :-) Dealing with Evil humors and Bad air and Bad blood
(requiring leeching) were obvious solutions to what ailed most people --
there was no need to suggest tiny itty-bitty creatures were everywhere; what
an absurd idea. Utter craziness.


BTW, the guy who proposed that fewer patients would die if doctors washed
their hands before examining them (before much was known about bacteria)
essentially got beaten to death for his troubles, too:

  http://en.wikipedia.org/wiki/Ignaz_Semmelweis

This guy (Herbert Shelton) was hounded by the police and the medical 
establishment for suggesting about a century ago that sunlight, fasting, and 
eating whole foods was a recipe for good health:

  http://www.soilandhealth.org/02/0201hyglibcat/shelton.bio.bidwell.htm

That recipe for good health is one we are only rediscovering now, like with 
the work of Dr. John Cannell and Dr. Joel Fuhrman putting the pieces 
together, based on the science and scientific tools and communications 
technology that Herbert Shelton did not have.

  http://www.vitamindcouncil.org/treatment.shtml
  http://www.youtube.com/watch?v=wPiR9VcuVWw

Still, Herbert Shelton missed one big idea (about DHA) that probably did him in:
  http://www.drfuhrman.com/library/lack_of_DHA_linked_to_Parkinsons.aspx

So, it can be hard to introduce new ideas, between the risk of being
beaten to death by the establishment (after all, how could a gentleman
doctor's hands be unclean?) and the problem of being only 95% technically
right and getting 5% technically or socially very wrong.


Could the semantic web be the equivalent of the unknown bacteria of the 
1590s? What would computing look like as a semantic web? Alan 

Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread Waldemar Kornewald
On Fri, Oct 8, 2010 at 8:28 PM, John Zabroski johnzabro...@gmail.com wrote:
 Why are we stuck with such poor architecture?

A bad language attracts bad code. ;)

Bye,
Waldemar

-- 
Django on App Engine, MongoDB, ...? Browser-side Python? It's open-source:
http://www.allbuttonspressed.com/blog/django



Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread John Zabroski
Even modern technology like Windows Phone 7 encourages, as part of its App
Store submission guidelines, apps to hardwire support for two screen
resolutions.  This is bizarre considering that the underlying graphics
implementation is resolution-independent.
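
For contrast, here is a minimal sketch (illustrative numbers only) of what
resolution independence buys: specify layout in device-independent units and
scale per display, so no particular resolution ever needs to be hardwired:

  def to_pixels(dips, dpi, baseline_dpi=96):
      # Convert device-independent pixels to physical pixels for one display.
      return round(dips * dpi / baseline_dpi)

  button_width_dips = 120
  for dpi in (96, 144, 192):  # three hypothetical screens
      print(dpi, "dpi ->", to_pixels(button_width_dips, dpi), "px")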

These bad choices add up.  As Gerry Weinberg wrote in Secrets of Consulting,
*Things are the way they are because they got that way ... one logical step
at a time*.

But bad choices keep us employed in our current roles (as consultants, as
in-house IT, etc.).

Cheers,
Z-Bo

On Fri, Oct 8, 2010 at 2:38 PM, Waldemar Kornewald wkornew...@freenet.dewrote:

 On Fri, Oct 8, 2010 at 8:28 PM, John Zabroski johnzabro...@gmail.com
 wrote:
  Why are we stuck with such poor architecture?

 A bad language attracts bad code. ;)

 Bye,
 Waldemar

 --
 Django on App Engine, MongoDB, ...? Browser-side Python? It's open-source:
 http://www.allbuttonspressed.com/blog/django



Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread Paul D. Fernhout

On 10/8/10 1:51 PM, Waldemar Kornewald wrote:

On Fri, Oct 8, 2010 at 5:20 PM, Paul D. Fernhout
pdfernh...@kurtz-fernhout.com  wrote:

The PataPata project (by me) attempted to bring some ideas for Squeak and
Self to Python about five years ago. A post mortem critique on it from four
years ago:
  PataPata critique: the good, the bad, the ugly
  http://patapata.sourceforge.net/critique.html


In that critique you basically say that prototypes *maybe* aren't
better than classes, after all. On the other hand, it seems like most
problems with prototypes weren't related to prototypes per se, but the
(ugly?) implementation in Jython which isn't a real prototype-based
language. So, did you have a fundamental problem with prototypes or
was it more about your particular implementation?


Waldemar-

Thanks for the comments.

A main practical concern was the issue of managing complexity and
documenting intent in a prototype system, especially in a rough-edged
environment that mixed a couple of layers (leading to confusing error
messages).


Still, to some extent, saying complexity is a problem is kind of like 
blaming disease on bad vapors. We need to be specific about things to do, 
like suggesting that you usually have to name things before you can share 
them and talking about naming systems or ways of reconciling naming 
conflicts, which is the equivalent of understanding that there is some 
relation between many diseases and bacteria, even if that does not tell you 
exactly what you should be doing about bacteria (given that people could not 
survive without them).


I think, despite what I said, the success of JavaScript does show that
prototypes can do the job -- even though, as Alan Kay said, what is most
important about the magic of Smalltalk is message passing, not objects (or
classes, or for that matter prototypes). I think the way prototype languages
work in practice, without message passing, and with the confusion of
communicating with them through either slots or functions, is problematical;
so I'd still rather be working in Smalltalk, which ultimately I think has a
paradigm that can scale better than either imperative or functional
languages.
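
A hedged Python sketch of that distinction (invented names): when every
interaction funnels through a single message-receiving method, an object can
delegate, forward, or reinterpret uniformly -- which is harder when callers
poke at slots directly:

  class MessageObject:
      def __init__(self, parent=None):
          self.slots = {}
          self.parent = parent  # prototype-style delegation chain

      def receive(self, selector, *args):
          # All communication goes through messages, Smalltalk-style.
          if selector in self.slots:
              value = self.slots[selector]
              return value(*args) if callable(value) else value
          if self.parent is not None:
              return self.parent.receive(selector, *args)  # delegate upward
          raise AttributeError("doesNotUnderstand: " + selector)

  point = MessageObject()
  point.slots["x"] = 3
  child = MessageObject(parent=point)
  print(child.receive("x"))  # 3, found via delegation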


But, the big picture issue I wanted to raise isn't about prototypes. It is
about more general issues -- like how do we have general tools that let us
look at all sorts of computing abstractions?


In biology, while it's true there are now several different types of 
microscopes (optical, electron, STM, etc.) in general, we don't have a 
special microscope developed for every different type of organism we want to 
look at, which is the case now with, say, debuggers and debugging processes.


So, what would the general tools look like that let us debug anything? And
I'd suggest that would not be gdb, as useful as that might be.


I can usefully point the same microscope at a feather, a rock, a leaf, and
pond water. So, why can't I point the same debugger at a Smalltalk image, a
web page with JavaScript served by a Python CGI script, a VirtualBox-emulated
Debian installation, and a semantic web trying to understand a jobless
recovery?


I know that may sound ludicrous, but that's my point. :-)

But when you think about it, there might be a lot of similarities at some 
level in thinking about those four things in terms of displaying 
information, moving between conceptual levels, maintaining to-do lists, 
doing experiments, recording results, communicating progress, looking at 
dependencies, reasoning about complex topics, and so on. But right now, I 
can't point one debugger at all those things, and even suggesting that we 
could sounds absurd. Of course, most things that sound absurd really are 
absurd, but still: If at first, the idea is not absurd, then there is no 
hope for it (Albert Einstein)


In March, John Zabroski wrote: "I am going to take a break from the
previous thread of discussion.  Instead, it seems like most people need a
tutorial in how to think BIG."


And that's what I'm trying to do here. There are billions of computers out
there running JavaScript, HTML, and CSS (plus some other machinery behind
the scenes, like CGI scripts). How can we think big about that overall
global message passing system? Billions of computers connected closely to
humans, supported by millions of customized tiny applications (each a web
page) interacting as a global dynamic semantic space, are just going to be
more interesting than a few thousand computers running some fancy new
kernel with some fancy new programming language. But should not new
computing take this reality into account somehow?


I've also got eight cores on my desktop, most of them idle most of the
time, and I have electric heat, so most of the year it does not cost me
anything to run them. So, raw performance is not as important as it used to
be. What is important are the conceptual abstractions, as well as the
practical connection with what people are willing to easily try.


Now, people made the same 

Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread John Zabroski
On Fri, Oct 8, 2010 at 5:04 PM, Paul D. Fernhout 
pdfernh...@kurtz-fernhout.com wrote:


 But, the big picture issue I wanted to raise isn't about prototypes. It
 is about more general issues -- like how do we have general tools that
 let us look at all sorts of computing abstractions?

 In biology, while it's true there are now several different types of
 microscope (optical, electron, STM, etc.), in general we don't have a
 special microscope developed for every different type of organism we want
 to look at -- which is essentially the situation now with, say, debuggers
 and debugging processes.

 So, what would the general tools look like to let us debug anything? And
 I'd suggest that it would not be gdb, as useful as gdb might be.



Computer scientists stink at studying living systems.  Most computer
scientists have absolutely zero experience studying and programming living
systems.  When I worked at BNL, I would have lunch with a biologist named
after William Tecumseh Sherman, who wrote his Ph.D. at NYU on making
nano-organisms dance.  That's the level of understanding and practical
experience I am talking about.

As for making things debuggable, distributed systems have a huge need for
compression of communication, and thus you can't expect humans to debug
compressed media.  You need a way to formally prove that when you
uncompress the media, you can just jump right in and debug it.  There have
been advances in compiler architecture geared towards this sort of
thinking, such as the logic of bunched implications vis-a-vis Separation
Logic, and even more practical ideas in the same direction, such as Xavier
Leroy's now famous compiler architecture for proving optimizing compiler
correctness.  The sorts of transformations that an application compiler
like GWT makes are pretty fancy, and if you want to look at the same GWT
application without compression today and just study what went wrong with
it, you can't.  What you need to do, in my humble opinion, is focus on
proving that the mappings between representations are isomorphic and
non-lossy, even if one representation needs hidden embeddings (interpreted
as no-ops by a syntax-directed compiler) to map back to the other.
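
As a deliberately trivial Python sketch of just that property (nothing like
what GWT or a verified compiler actually does, and all the names here are
made up for illustration): the 'compiled' form drops comment lines but
records them as hidden embeddings -- no-ops as far as execution goes -- so
the mapping back to the source is exact.  The round-trip assertion at the
end is the isomorphic, non-lossy part:

    # Toy 'compiler': the compressed form drops comment lines, while a side
    # table of hidden embeddings records where they were, so decompilation
    # is exact.  The assertion below checks the non-lossy round trip.

    def compile_(source):
        code, embeddings = [], []
        for i, line in enumerate(source.splitlines()):
            if line.lstrip().startswith('#'):
                embeddings.append((i, line))  # hidden embedding: a no-op
            else:
                code.append(line)
        return code, embeddings

    def decompile(code, embeddings):
        lines = list(code)
        for index, line in embeddings:        # re-insert no-ops in order
            lines.insert(index, line)
        return '\n'.join(lines)

    source = "x = 1\n# a comment the optimizer would drop\ny = x + 1"
    compiled, hidden = compile_(source)
    assert decompile(compiled, hidden) == source  # isomorphic mapping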

There are also other fancy techniques being developed in programming
language theory (PLT) right now.  Phil Wadler and Jeremy Siek's Blame
Calculus is a good illustration of how to study a living system in a
creative way (though it does not provide a complete picture -- akin to not
knowing you need to stain a slide before putting it under the microscope),
as are Carl Hewitt's ActorScript and Direct Logic.  These are the only
efforts I am aware of that try to provide some information on why something
happened at runtime.


 I can usefully point the same microscope at a feather, a rock, a leaf,
 and pond water. So, why can't I point the same debugger at a Smalltalk
 image, a web page with JavaScript served by a Python CGI script, a
 VirtualBox emulated Debian installation, and a semantic web trying to
 understand a jobless recovery?

 I know that may sound ludicrous, but that's my point. :-)



What the examples I gave above have in common is that there are certain
limitations on how general you can make this, just as Oliver Heaviside
suggested we discard the balsa wood ship models for engineering equations
derived from Maxwell.



 But when you think about it, there might be a lot of similarities at some
 level in thinking about those four things in terms of displaying
 information, moving between conceptual levels, maintaining to-do lists,
 doing experiments, recording results, communicating progress, looking at
 dependencies, reasoning about complex topics, and so on. But right now, I
 can't point one debugger at all those things, and even suggesting that we
 could sounds absurd. Of course, most things that sound absurd really are
 absurd, but still: "If at first the idea is not absurd, then there is no
 hope for it." (Albert Einstein)

 In March, John Zabroski wrote: "I am going to take a break from the
 previous thread of discussion.  Instead, it seems like most people need a
 tutorial in how to think BIG."

 And that's what I'm trying to do here.


Thanks for the kind words.  I have shared my present thoughts with you
here.  But a tutorial in my eyes is more about providing people with a site
where they can go and just be engulfed in big, powerful ideas.  The FONC
wiki is certainly not that.  Most of the interesting details in the project
are buried and not presented in exciting ways, or if they are, they are
still buried and require somebody to dig them up.  That is a huge bug.  In
short, the FONC wiki is not even a wiki.  It is a chalkboard with one stick
of chalk, and it is locked away in some teacher's desk.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread Dirk Pranke
On Fri, Oct 8, 2010 at 11:28 AM, John Zabroski johnzabro...@gmail.com wrote:
 JavaScript also doesn't support true delegation, as in the Actor Model of
 computation.

 Also, Sencha Ext Designer is an abomination.  It is a fundamental
 misunderstanding of the Web and how to glue together chunks of text via
 hyperlinks.  It is the same story for any number of technologies that
 claim to fix the Web, including GWT... they are all not quite up to par,
 at least by my standards.

 The fundamental problem with the Web is the Browser.  This is the
 monstrous bug.

 The fundamental problem with Sencha Ext is that the quality of the code
 isn't that great (many JavaScript programmers compound the flaws of the
 Browser by not understanding how to effectively program against the
 Browser model), and it also misunderstands distributed computing.  It
 encourages writing applications as if they were still running on
 single-tier IBM computers from the 1970s/80s costing thousands of dollars.

 Why are we stuck with such poor architecture?


Apologies if you have posted this before, but have you talked anywhere in
more detail about what the monstrous bug is (specifically), or how
programming for the web misunderstands distributed computing?

-- Dirk

 Cheers,
 Z-Bo

 On Fri, Oct 8, 2010 at 1:51 PM, Waldemar Kornewald wkornew...@freenet.de
 wrote:


  I am wondering if there is some value in reviving the idea for
  JavaScript?
 
  Firebug shows what is possible as a sort of computing microscope for
  JavaScript and HTML and CSS. Sencha Ext Designer shows what is possible
  as far as interactive GUI design.

 What exactly does JavaScript give you that you don't get with Python?

 If you want to have prototypes then JavaScript is probably the worst
 language you can pick. You can't specify multiple delegates and you
 can't change the delegates at runtime (unless your browser supports
 __proto__, but even then you can only have one delegate). Also, as a
 language JavaScript is just not as powerful as Python. If all you want is
 an implementation of prototypes that doesn't require modifications to the
 interpreter, then you can get that with Python, too (*without*
 JavaScript's delegation limitations).

 Bye,
 Waldemar

 --
 Django on App Engine, MongoDB, ...? Browser-side Python? It's open-source:
 http://www.allbuttonspressed.com/




___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread Dirk Pranke
On Fri, Oct 8, 2010 at 2:04 PM, Paul D. Fernhout
pdfernh...@kurtz-fernhout.com wrote:
 It's totally stupid to use JavaScript as a VM for world peace since it
 would be a lot better if every web page ran in its own well-designed VM and
 you could create content that just compiled to the VM, and the VMs had some
 sensible and secure way to talk to each other and respect each other's
 security zones in an intrinsically and mutually secure way. :-)
  Stating the 'bleeding' obvious (security is cultural)
  http://groups.google.com/group/diaspora-dev/msg/17cf35b6ca8aeb00


You are describing something that is not far from Chrome's actual design.
It appears that the other browser vendors are moving in similar directions.
Are you familiar with it? Do you care to elaborate (off-list, if you like)
on the differences between what Chrome does and what you'd like (apart from
the JavaScript VM not being designed for anything other than JavaScript)?

-- Dirk

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread Casey Ransberger
I think type is a foundationally bad idea. What matters is that the object
in question can respond intelligently to the message you're passing it. Or
at least, that's what I think right now, anyway. It seems like type
specification (and, as such, early binding) has a very limited real use in
the domain of really-actually-for-real-and-seriously mission critical
systems, like those that guide missiles or passenger planes.

In the large though, it really seems like specifying type is a lot of
ceremonial overhead if all you really need to do is pass some arguments to
a function, or a message to an object.

It might help if you explained what you meant by type. If you're thinking
of using class as type, I expect you'll fail. Asking for an object's class,
in any case where one is not employing reflection to implement a tool for
programmers, reduces the power of polymorphism in your program. It can
easily be argued that you shouldn't have to worry about type: you should be
able to expect that your method's argument is something which sensibly
implements a protocol that includes the message you're sending it. If
you're talking about primitive types, e.g. a hardware integer/word, or a
string as a series of bytes, then I suppose the conversation is different,
right? Because if we're talking about machine primitives, we really aren't
talking about objects at all, are we?
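
A quick Python sketch of what I mean (the classes here are invented purely
for illustration): the calling code never asks for a class or declares a
type, it just sends the message and trusts the receiver to respond
intelligently:

    # Polymorphism by protocol: the caller never checks isinstance(); it
    # just sends the message and lets the receiver respond.

    class Duck:
        def report(self):
            return 'quack'

    class Autopilot:
        def report(self):
            return 'on course'  # an unrelated class honoring the same protocol

    def status(thing):
        # No class check, no type declaration: just a message send.
        return thing.report()

    for thing in (Duck(), Autopilot()):
        print(status(thing))    # quack, then: on course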

On Oct 8, 2010, at 3:23 PM, spir denis.s...@gmail.com wrote:

 On Fri, 8 Oct 2010 19:51:32 +0200
 Waldemar Kornewald wkornew...@freenet.de wrote:
 
 Hi,
 
 On Fri, Oct 8, 2010 at 5:20 PM, Paul D. Fernhout
 pdfernh...@kurtz-fernhout.com wrote:
 The PataPata project (by me) attempted to bring some ideas for Squeak and
 Self to Python about five years ago. A post mortem critique on it from four
 years ago:
  PataPata critique: the good, the bad, the ugly
  http://patapata.sourceforge.net/critique.html
 
 In that critique you basically say that prototypes *maybe* aren't
 better than classes, after all. On the other hand, it seems like most
 problems with prototypes weren't related to prototypes per se, but the
 (ugly?) implementation in Jython which isn't a real prototype-based
 language. So, did you have a fundamental problem with prototypes or
 was it more about your particular implementation?
 
 I have played with the design (~ half-way) of a toy prototype-based
 language and ended up thinking there is some semantic flaw in this
 paradigm. Namely, the models we need to express in programs constantly
 involve notions of kinds of similar elements, which are often held in
 collections; collections and types play together, in my view. In other
 words, type is a fundamental modelling concept that should be a core
 feature of any language.
 Indeed, there are many ways to realise it concretely. In my view, the
 notion of prototype (at least in the sense of Self or Io) is too weak and
 vague. For instance, cloning does not help much in practice: programmers
 constantly reinvent constructors, or even separate object creation and
 initialisation. Having such features built in is conceptually helpful and
 practically secure, but most importantly it brings them in as common
 wealth of the programming community (a decisive argument for builtin
 features, imo).
 Conversely, class-based languages miss the notion of, and the freedom to
 create, individual objects. Forcing the programmer to create a class for
 a chess board is simply stupid in my view, and worse: semantically wrong.
 It prevents the program from mirroring the model.
 
 Bye,
 Waldemar
 
 
 Denis
 -- -- -- -- -- -- --
 vit esse estrany ☣
 
 spir.wikidot.com
 
 

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread Richard Karpinski
But wait. I think we need more complex types than are currently allowed.
When we actually compute something on the back of an envelope, we have been
taught to carry all the units along explicitly, but when we set it up for a
really stupid computer to do it automatically, we are almost always
forbidden from even mentioning the units. This seems quite reckless to me.
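
It takes very little machinery to let the computer carry the units along
the way we do on the envelope. Here is a minimal Python sketch (purely
illustrative, not a library I am proposing): a quantity that combines units
under multiplication and refuses addition when the units disagree:

    # A minimal unit-carrying quantity: multiplication combines units,
    # addition insists they already match -- just like on the envelope.

    class Quantity:
        def __init__(self, value, units):
            self.value = value
            self.units = dict(units)  # e.g. {'m': 1, 's': -1} for m/s

        def __add__(self, other):
            if self.units != other.units:
                raise TypeError('cannot add %s to %s'
                                % (self.units, other.units))
            return Quantity(self.value + other.value, self.units)

        def __mul__(self, other):
            units = dict(self.units)
            for dim, power in other.units.items():
                units[dim] = units.get(dim, 0) + power
                if units[dim] == 0:
                    del units[dim]  # dimension cancelled out
            return Quantity(self.value * other.value, units)

        def __repr__(self):
            return '%s %s' % (self.value, self.units)

    speed = Quantity(3.0, {'m': 1, 's': -1})
    time = Quantity(2.0, {'s': 1})
    print(speed * time)  # 6.0 {'m': 1} -- metres, as it should be
    # speed + time would raise TypeError: the units disagree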

Why does it seem that no one cares that there are vast flaws in virtually
every project?

Why is no one concerned, in the most modern development techniques, with
testing, step by step, what actually works with the people who have to use
the resulting system?

Why does the whole industry accept as normal that most large projects fail,
and even those said to have succeeded are filled with cruft that never gets
used?

Aside from W. Edwards Deming, Tom Gilb, Jef Raskin, and me, I guess
everybody thinks that as long as they get paid, everything is fine.

Richard

On Fri, Oct 8, 2010 at 7:57 PM, Casey Ransberger
casey.obrie...@gmail.com wrote:

 I think type is a foundationally bad idea. What matters is that the
 object in question can respond intelligently to the message you're
 passing it. Or at least, that's what I think right now, anyway. It seems
 like type specification (and, as such, early binding) has a very limited
 real use in the domain of really-actually-for-real-and-seriously mission
 critical systems, like those that guide missiles or passenger planes.

 In the large though, it really seems like specifying type is a lot of
 ceremonial overhead if all you really need to do is pass some arguments
 to a function, or a message to an object.

 It might help if you explained what you meant by type. If you're thinking
 of using class as type, I expect you'll fail. Asking for an object's
 class, in any case where one is not employing reflection to implement a
 tool for programmers, reduces the power of polymorphism in your program.
 It can easily be argued that you shouldn't have to worry about type: you
 should be able to expect that your method's argument is something which
 sensibly implements a protocol that includes the message you're sending
 it. If you're talking about primitive types, e.g. a hardware
 integer/word, or a string as a series of bytes, then I suppose the
 conversation is different, right? Because if we're talking about machine
 primitives, we really aren't talking about objects at all, are we?

 On Oct 8, 2010, at 3:23 PM, spir denis.s...@gmail.com wrote:

  On Fri, 8 Oct 2010 19:51:32 +0200
  Waldemar Kornewald wkornew...@freenet.de wrote:
 
  Hi,
 
  On Fri, Oct 8, 2010 at 5:20 PM, Paul D. Fernhout
  pdfernh...@kurtz-fernhout.com wrote:
  The PataPata project (by me) attempted to bring some ideas for Squeak
  and Self to Python about five years ago. A post mortem critique on it
  from four years ago:
   PataPata critique: the good, the bad, the ugly
   http://patapata.sourceforge.net/critique.html
 
  In that critique you basically say that prototypes *maybe* aren't
  better than classes, after all. On the other hand, it seems like most
  problems with prototypes weren't related to prototypes per se, but the
  (ugly?) implementation in Jython which isn't a real prototype-based
  language. So, did you have a fundamental problem with prototypes or
  was it more about your particular implementation?
 
  I have played with the design (~ half-way) of a toy prototype-based
  language and ended up thinking there is some semantic flaw in this
  paradigm. Namely, the models we need to express in programs constantly
  involve notions of kinds of similar elements, which are often held in
  collections; collections and types play together, in my view. In other
  words, type is a fundamental modelling concept that should be a core
  feature of any language.
  Indeed, there are many ways to realise it concretely. In my view, the
  notion of prototype (at least in the sense of Self or Io) is too weak
  and vague. For instance, cloning does not help much in practice:
  programmers constantly reinvent constructors, or even separate object
  creation and initialisation. Having such features built in is
  conceptually helpful and practically secure, but most importantly it
  brings them in as common wealth of the programming community (a decisive
  argument for builtin features, imo).
  Conversely, class-based languages miss the notion of, and the freedom to
  create, individual objects. Forcing the programmer to create a class for
  a chess board is simply stupid in my view, and worse: semantically
  wrong. It prevents the program from mirroring the model.
 
  Bye,
  Waldemar
 
 
  Denis
  -- -- -- -- -- -- --
  vit esse estrany ☣
 
  spir.wikidot.com
 
 




-- 
Richard Karpinski, Nitpicker extraordinaire
148 Sequoia Circle,
Santa Rosa, CA