Re: [fonc] Current topics

2013-01-03 Thread Simon Forman
On Wed, Jan 2, 2013 at 10:35 PM, BGB cr88...@gmail.com wrote:
 On 1/2/2013 10:31 PM, Simon Forman wrote:

 On Tue, Jan 1, 2013 at 7:53 AM, Alan Kay alan.n...@yahoo.com wrote:

 The most recent discussions get at a number of important issues whose
 pernicious snares need to be handled better.

 In an analogy to sending messages most of the time successfully through noisy channels -- where the noise also affects whatever we add to the messages to help (and we may have imperfect models of the noise) -- we have to ask: what kinds and rates of error would be acceptable?
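(A toy illustration of that analogy, in Python with invented numbers: the redundancy we add to protect a message passes through the same noise as the message itself, so added care lowers the error rate but never reaches zero.)

    import random

    P_NOISE = 0.05   # chance that any transmitted bit flips (data *and* redundancy)
    REPEAT = 5       # naive redundancy: send every bit five times

    def transmit(bit):
        """The noisy channel: flips the bit with probability P_NOISE."""
        return bit ^ (random.random() < P_NOISE)

    def send_with_care(bit):
        """The redundancy itself crosses the same noisy channel; majority vote decodes."""
        votes = sum(transmit(bit) for _ in range(REPEAT))
        return 1 if votes > REPEAT // 2 else 0

    trials = 100_000
    errors = sum(send_with_care(0) != 0 for _ in range(trials))
    print("raw error rate:     ", P_NOISE)
    print("residual error rate:", errors / trials)   # lower, but never zero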

 We humans are a noisy species. And on both ends of the transmissions. So a message that can be proved perfectly received as sent can still be interpreted poorly by a human directly, or by software written by humans.

 A wonderful specification language that produces runnable code good enough to make a prototype is still going to require debugging, because it is hard to get the spec-specs right (even with a machine version of human-level AI to help with larger goals comprehension).

 As humans, we are used to being sloppy about message creation and sending, and rely on negotiation and good will after the fact to deal with errors.

 We've not done a good job of dealing with these tendencies within
 programming -- we are still sloppy, and we tend not to create negotiation
 processes to deal with various kinds of errors.

 However, we do see something that is actual engineering -- with both care in message sending *and* negotiation -- where eventual failure is not tolerated: mostly in hardware, and in a few vital low-level systems which have to scale pretty much essentially error-free, such as the Ethernet and Internet.

 My prejudices have always liked dynamic approaches to problems with error detection and improvements (if possible). Dan Ingalls was (and is) a master at getting a whole system going in such a way that it has enough integrity to exhibit its failures and allow many of them to be addressed in the context of what is actually going on, even with very low level failures. It is interesting to note the contributions from what you can say statically (the higher the level of the language the better) -- what can be done with meta (the more dynamic and deep the integrity, the more powerful and safe meta becomes) -- and the tradeoffs of modularization (hard to sum up, but as humans we don't give all modules the same care and love when designing and building them).

 Mix in real human beings and a world-wide system, and what should be done? (I don't know, this is a question to the group.)

 There are two systems I look at all the time. The first is lawyers
 contrasted with engineers. The second is human systems contrasted with
 biological systems.

 There are about 1.2 million lawyers in the US, and about 1.5 million engineers (some of them in computing). The current estimates of programmers in the US are about 1.3 million (US Dept of Labor counting programmers and developers). Also, the Internet and multinational corporations, etc., internationalize the impact of programming, so we need an estimate of the programmers world-wide, probably another million or two? Add in the ad hoc programmers, etc.? The populations are similar enough in size to make the contrasts in methods and results quite striking.

 Looking for analogies, to my eye what is happening with programming is more similar to what has happened with law than with classical engineering. Everyone will have an opinion on this, but I think it is partly because nature is a tougher critic on human-built structures than humans are on each other's opinions, and part of the impact of this is amplified by the simpler, shorter-term liabilities that imperfect structures have for human safety, compared with imperfect laws (one could argue that the latter are much more of a disaster in the long run).

 And, in trying to tease useful analogies from Biology, one I get is that the largest gap in complexity of atomic structures is the one from polymers to the simplest living cells. (One of my two favorite organisms is Pelagibacter ubique, which is the smallest non-parasitic standalone organism. Discovered just 10 years ago, it is the most numerous known bacterium in the world, and accounts for 25% of all of the plankton in the oceans. Still it has about 1300+ genes, etc.)

 What's interesting (to me) about cell biology is just how much stuff is organized to maintain the integrity of life. Craig Venter thinks that a minimal hand-crafted genome for a cell would still require about 300 genes (and the tiniest whole organism still winds up with a lot of components).

 Analogies should be suspect -- both the one to the law, and the one here should be scrutinized -- but this one harmonizes with one of Butler Lampson's conclusions/prejudices: that you are much better off making -- with great care -- a few kinds of relatively big modules as basic building blocks than to have zillions of different modules being constructed by vanilla programmers.
Re: [fonc] Current topics

2013-01-03 Thread Alan Kay
Hi David

I think both of your essays are important, as is the general style of 
aspiration.

The "ingredients of a soup" idea is one of the topics we were supposed to work on in the STEPS project, but it counts as a shortfall: we wound up using our time on other parts. We gesture at it in some of the yearly reports.

The thought was that a kind of semantic publish and subscribe scheme -- that 
dealt in descriptions and avoided having to know names of functionalities as 
much as possible -- would provide a very scalable loose coupling mechanism. We 
were hoping to get beyond the pitfalls of attempts at program synthesis from 
years ago that used pre-conditions and post-conditions to help matchers paste 
things together.
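(To make the contrast concrete, here is a minimal Python sketch of the kind of simple matcher this hopes to get beyond: modules publish and subscribe capability descriptions rather than names. All tags and module bodies are invented, and the hard, open part -- matching on meaning rather than on literal tags -- is exactly what it leaves out.)

    # A toy semantic publish/subscribe registry: providers publish descriptions
    # of what they can do; subscribers ask for descriptions, never for names.
    published = []  # list of (description_set, implementation)

    def publish(description, impl):
        published.append((frozenset(description), impl))

    def subscribe(requirement):
        """Return every implementation whose description covers the requirement."""
        need = frozenset(requirement)
        return [impl for desc, impl in published if need <= desc]

    # Hypothetical providers:
    publish({"image", "resize", "lossless"}, lambda img: f"resized({img})")
    publish({"image", "compress", "lossy"},  lambda img: f"compressed({img})")

    # The consumer never learns a function name, only a description:
    for impl in subscribe({"image", "resize"}):
        print(impl("photo.png"))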


I'm hoping that you can cast more light on this area. One of my thoughts is that a good matcher might be more like a dynamic discovery system (e.g. Lenat's Eurisko) than a simple matcher.

It's interesting to think of what the commonalities of such a system should be like. A thought here was that a suitable descriptive language would be -- could be? should be? -- lots smaller and simpler than a set of standard conventions and tags for functionality.

Joe Goguen was a good friend of mine, and his early death was a real tragedy. 
As you know, he spent many years trying to find sweet spots in formal semantics 
that could also be used in practical ways...

Best wishes,

Alan




 From: David Barbour dmbarb...@gmail.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Wednesday, January 2, 2013 11:09 PM
Subject: Re: [fonc] Current topics
 

On Tue, Jan 1, 2013 at 7:53 AM, Alan Kay alan.n...@yahoo.com wrote:

As humans, we are used to being sloppy about message creation and sending, and 
rely on negotiation and good will after the fact to deal with errors. 


You might be interested in my article on avoiding commitment in HCI, and its 
impact on programming languages. I address some issues of negotiation and 
clarification after-the-fact. I'm interested in techniques that might make 
this property more systematic and compositional, such as modeling messages or 
signals as having probabilistic meanings in context.
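(A very rough Python sketch of one such technique, with made-up numbers: treat a message as a prior over meanings, let context reweight it, and fall back to explicit clarification -- the after-the-fact negotiation -- when no meaning clearly dominates.)

    # Toy model: a message denotes a distribution over meanings, context
    # reweights it, and low confidence triggers clarification instead of action.
    def interpret(message_prior, context_likelihood, threshold=0.8):
        posterior = {m: p * context_likelihood.get(m, 1.0)
                     for m, p in message_prior.items()}
        total = sum(posterior.values())
        posterior = {m: p / total for m, p in posterior.items()}
        best = max(posterior, key=posterior.get)
        if posterior[best] < threshold:
            return ("clarify", posterior)   # don't commit; negotiate instead
        return ("act", best)

    # "open" is ambiguous; the context (a file browser) reweights the meanings.
    print(interpret({"open_file": 0.5, "open_window": 0.5},
                    {"open_file": 0.9, "open_window": 0.2}))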




you are much better off making -- with great care -- a few kinds of 
relatively big modules as basic building blocks than to have zillions of 
different modules being constructed by vanilla programmers


Large components are probably a good idea if humans are hand-managing the glue between them. But what if there were another way? Instead of modules being rigid components that we painstakingly wire together, they could be ingredients of a soup - with the melding and combination process being largely automated.


If the modules are composed automatically, they can become much smaller, more 
specialized and reusable. Large components require a lot of inefficient 
duplication of structure and computation (seen even in biology).
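(A minimal sketch, assuming the simplest possible melding rule: hypothetical tiny modules declare what they consume and produce, and a naive planner chains them automatically. Real soup-style composition would need far richer descriptions than bare type tags.)

    # Toy automatic composition: each tiny module declares what it consumes and
    # produces, and a breadth-first planner searches for a chain to the goal.
    MODULES = [
        ("parse",    "bytes",  "text"),
        ("tokenize", "text",   "tokens"),
        ("count",    "tokens", "histogram"),
    ]

    def plan(have, want, modules=MODULES):
        """Breadth-first search for a pipeline turning `have` into `want`."""
        frontier = [(have, [])]
        while frontier:
            state, path = frontier.pop(0)
            if state == want:
                return path
            for name, src, dst in modules:
                if src == state and name not in path:
                    frontier.append((dst, path + [name]))
        return None

    print(plan("bytes", "histogram"))  # ['parse', 'tokenize', 'count']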


 


Note that desires for runnable specifications, etc., could be quite harmonious with a viable module scheme that has great systems integrity.


Certainly. Before his untimely departure, Joseph Goguen was doing a lot of work on modular, runnable specifications (the BOBJ - behavioral OBJ - language, like a fusion of OOP and term rewriting).
 
Regards,


Dave



Re: [fonc] Current topics

2013-01-03 Thread Miles Fidelman

BGB wrote:

Whoa, I think you just invented nanotech organelles, at least this is the first time I've heard that idea and it seems pretty mind-blowing.  What would a cell use a cpu for?


mostly so that microbes could be programmed in a manner more like 
larger-scale computers.


say, the microbe has its basic genome and capabilities, which can be 
treated more like hardware, and then a person can write behavioral 
programs in a C-like language or similar, and then compile them and 
run them on the microbes.


for larger organisms, possibly the cells could network together and 
form into a sort of biological computer; then you could have 
something the size of an insect with several GB of storage and 
processing power rivaling a modern PC, along with other 
possibilities, such as the ability to communicate via WiFi or similar.


you might want to google biological computing - you'll start finding 
things like this:
http://www.guardian.co.uk/science/blog/2009/jul/24/bacteria-computer 
(title: Bacteria make computers look like pocket calculators)




alternatively, there is the possibility of having an organism with more 
powerful neurons, such that rather than neurons communicating via 
simple impulses, they can send more complex messages (neuron fires 
with extended metadata, ...). then neurons can make more informed 
decisions about whether to fire off a message.
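(Purely as a toy model of that idea, in Python: a neuron that receives messages carrying metadata rather than bare impulses, and inspects it before deciding to propagate. All fields and numbers are invented.)

    # Toy "neuron": messages carry metadata, and the metadata informs firing.
    def receive(neuron_state, message):
        strength = message["strength"]
        # metadata lets the neuron discount stale or low-priority signals
        if message["age_ms"] > 10:
            strength *= 0.5
        if message["priority"] == "background":
            strength *= 0.25
        neuron_state["potential"] += strength
        if neuron_state["potential"] >= neuron_state["threshold"]:
            neuron_state["potential"] = 0.0
            return {"strength": 1.0, "age_ms": 0, "priority": message["priority"]}
        return None  # informed decision: don't fire

    state = {"potential": 0.0, "threshold": 1.0}
    print(receive(state, {"strength": 0.9, "age_ms": 2, "priority": "urgent"}))  # None
    print(receive(state, {"strength": 0.9, "age_ms": 2, "priority": "urgent"}))  # fires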



cells do lots of nifty stuff, but most of their functionality is based 
more around cellular survival than around computational tasks.


Ummm have you heard of:

1. Brains (made up of cells),

2. Our immune systems,

3. The complex behaviors of fungi

Think massively parallel/distributed computation focused on organism-level 
survival and behavior.  If you want to program colonies of nano 
machines (biological or otherwise), you're going to have to start 
thinking of very different kinds of algorithms, running on 
something a lot more powerful than a small cpu programmed in C.


Start thinking billions of actors, running on highly parallel hardware, 
and we might start approaching what cells do today.  (FYI, try googling 
microtubules and you'll find some interesting papers on how these 
sub-cellular structures just might act like associative arrays :-)
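(In miniature, and only as a sketch: actors as queues plus threads in Python. The point of the analogy is the count; scale this by nine or ten orders of magnitude to approach what cells do.)

    import queue, threading

    def actor(name, inbox, outbox):
        """Each actor: receive a message, do a tiny local step, pass it on."""
        while True:
            msg = inbox.get()
            if msg is None:
                break
            outbox.put(f"{name}({msg})")

    # A tiny chain of three actors; in the analogy the count would be billions.
    boxes = [queue.Queue() for _ in range(4)]
    for i in range(3):
        threading.Thread(target=actor, args=(f"a{i}", boxes[i], boxes[i + 1]),
                         daemon=True).start()

    boxes[0].put("signal")
    print(boxes[3].get())   # a2(a1(a0(signal)))
    for i in range(3):
        boxes[i].put(None)  # shut the chain down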


Cheers,

Miles Fidelman





so, you have microbes that eat things, or produce useful byproducts, 
but none that actually accomplish specific tasks or perform actions 
on-command (such as, say, microbes which build things).


like, say, you could have a tub of gloop which, when instructed, could 
make objects like cell-phones, ..., which a person can make use of, 
and when done using it, they can put it back into the tub and issue a 
command and the organic gloop would decompose it again (back into raw 
materials).


then, if it runs low on raw materials, you can dump in some gloop 
food, which gives it more of the things it needs both to survive and 
to build more stuff.



potentially, you could also make things like essentially living 
robots, which can perform the usual robotic tasks but which can 
replicate themselves and heal damage more like living organisms; solar 
power plants that are, essentially, giant plants; ...



although, yes, some of this does bring up the possibility that 
scary/nasty stuff could also be possible, like, say, living buildings 
which eat their occupants, mass replication of near-indestructible 
critters, random emergence of things like the blob, ... (some could 
arise either by accident, such as genetic mutations leading to 
malfunction, or by deliberate acts, such as sabotage or weaponized 
critters).




I was being more metaphorical in my first post.  I've been looking at
an algorithmic botany website (
http://algorithmicbotany.org/papers/ ) and thinking about L-systems
that, instead of being geometrical, are somehow mapping into
spaces of...  I don't know exactly how to explain it. They've got
dynamic models mimicking forest growth by modelling growth and light
availability and such.  What if, for example, your distributed
application could use something like this to allocate nodes or other
resources dynamically in response to usage and available resources?
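(One literal reading of that, as a Python toy: an L-system whose growth rule consults a shared resource budget, so expansion self-limits the way the forest models do. The grammar and the budget are invented for illustration.)

    # Toy resource-sensitive L-system: 'A' spawns a new growing tip only while
    # the shared budget holds out, so growth tapers off by itself.
    def step(state, budget):
        out = []
        for sym in state:
            if sym == "A" and budget > 0:
                out += ["A", "B"]       # grow: allocate a new node
                budget -= 1
            elif sym == "A":
                out.append("B")         # starved: the tip stops growing
            else:
                out.append("B")         # mature nodes just persist
        return "".join(out), budget

    state, budget = "A", 5
    for gen in range(6):
        print(gen, state, f"budget={budget}")
        state, budget = step(state, budget)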

It seems like a start on how to think about a program/OS that can
cooperate with copies of itself to cope dynamically with changing
conditions and inputs.


yep, fair enough...





--
In theory, there is no difference between theory and practice.
In practice, there is.  -- Yogi Berra



Re: [fonc] Current topics

2013-01-03 Thread BGB

On 1/3/2013 7:27 PM, Miles Fidelman wrote:

BGB wrote:

Whoa, I think you just invented nanotech organelles, at least this is the first time I've heard that idea and it seems pretty mind-blowing.  What would a cell use a cpu for?


mostly so that microbes could be programmed in a manner more like 
larger-scale computers.


say, the microbe has its basic genome and capabilities, which can be 
treated more like hardware, and then a person can write behavioral 
programs in a C-like language or similar, and then compile them and 
run them on the microbes.


for larger organisms, possibly the cells could network together and 
form into a sort of biological computer; then you could have 
something the size of an insect with several GB of storage and 
processing power rivaling a modern PC, along with other 
possibilities, such as the ability to communicate via WiFi or similar.


you might want to google biological computing - you'll start finding 
things like this:
http://www.guardian.co.uk/science/blog/2009/jul/24/bacteria-computer 
(title: Bacteria make computers look like pocket calculators)




FWIW: this is like comparing a fire to an electric motor.


yes, but you can't use a small colony of bacteria to do something like 
drive an XBox360, they just don't work this way.


with bacteria containing CPUs, you could potentially do so.

and, by the time you got up to a colony the size of an XBox360, the 
available processing power would be absurd...



this is not a deficiency of the basic biological mechanisms (which are 
in fact quite powerful), but rather their inability to readily organize 
themselves into a larger-scale computational system.





alternatively, there is the possibility of having an organism with 
more powerful neurons, such that rather than neurons communicating 
via simple impulses, they can send more complex messages (neuron 
fires with extended metadata, ...). then neurons can make more 
informed decisions about whether to fire off a message.



cells do lots of nifty stuff, but most of their functionality is based 
more around cellular survival than around computational tasks.


Ummm have you heard of:

1. Brains (made up of cells),

2. Our immune systems,

3. The complex behaviors of fungi



yes, but observe just how pitifully these things do *at* traditional 
computational tasks...


for all the raw power in something like the human brain, and the ability 
of humans to possess things like general intelligence, ..., we *still* 
have to single-step in a stupid graphical debugger and require *hours* 
to think about and write chunks of code (and weeks or months to write a 
program). a typical human can barely even add or subtract numbers in a 
reasonable time-frame (with the relative absurdity that, for all their 
raw power, humans find it easier to just tap the calculation into a 
calculator in the first place).



meanwhile, a C compiler can churn through and compile around a million 
lines of code in a minute or so, a task a human has no hope of even 
attempting.


something about the human mind is clearly deficient at this task.


Think massively parallel/distributed computation focused on organism-level 
survival and behavior.  If you want to program colonies of nano 
machines (biological or otherwise), you're going to have to start 
thinking of very different kinds of algorithms, running on 
something a lot more powerful than a small cpu programmed in C.




I am thinking of billions of small CPUs programmed in C, and probably 
organized into micrometer or millimeter scale networks. there would be a 
reason why each cell would have its own CPU (built out of basic 
biological components).


also, humans would probably use a C-like language mostly because it 
would be most familiar, but it need not be executed exactly as it 
would be on a modern computer (they may or may not have an ISA as 
currently understood).


probably these would need to mesh together somehow and simulate the 
functionality of larger computers, and would likely work by distributing 
computation and memory storage among individual cells.


even if the signaling and organization are moderately inefficient, this 
could likely be made up for with redundancy and bulk.
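(A sketch of that redundancy-and-bulk trade in Python, with an invented per-cell failure rate: store each bit in many unreliable cells and read it back by majority vote.)

    import random

    P_FAIL = 0.2    # chance any single "cell" garbles its copy (invented number)
    COPIES = 99     # bulk: store the same bit in many cells

    def store(bit):
        """Each cell keeps its own copy; some copies get corrupted."""
        return [bit ^ (random.random() < P_FAIL) for _ in range(COPIES)]

    def read(cells):
        """Majority vote recovers the value despite unreliable individuals."""
        return 1 if sum(cells) > len(cells) // 2 else 0

    failures = sum(read(store(1)) != 1 for _ in range(10_000))
    print(f"per-cell failure rate: {P_FAIL}, whole-read failures: {failures}/10000")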



similarly, tasks that would, at the larger scale, be accomplished via 
robots and bulk mechanical forces, could be performed instead by 
cooperative actions by individual cells (say, millions of cells all push 
on something in the same direction at the same time, or they start 
building a structure by secreting specific chemicals at specific 
locations, ...).



Start thinking billions of actors, running on highly parallel 
hardware, and we might start approaching what cells do today.  (FYI, 
try googling microtubules and you'll find some interesting papers 
on how these sub-cellular structures just might act like associative 
arrays :-)




they don't do the same things...

as-noted,