Re: [fonc] Final STEP progress report abandoned?

2013-09-08 Thread Paul Homer
Hi Alan,

I agree that there is, and probably will always be, a necessity to 'think 
outside of the box', although if the box were larger, it would be less 
necessary. But I wasn't really thinking about scientists and the pursuit of new 
knowledge, but rather the trillions(?) of mundane decisions that people make 
on a daily basis. 

A tool like Wikipedia really helps in being able to access a refined chunk of 
knowledge, but the navigation and categorization are statically defined. 
Sometimes what I am trying to find is spread horizontally across a large number 
of pages. If, as a simple example, a person could have a dynamically generated 
Wikipedia page created just for them that factored in their current knowledge 
and the overall context of the situation, then they'd be able to utilize that 
knowledge more appropriately. They could still choose to skim or ignore it, but 
if they wanted a deeper understanding, they could read the compiled research in 
a few minutes. 

The Web, particularly for programmers, has been a great tease for this. You can 
look up any coding example instantly (although you do have to sort through the 
bad examples and misinformation). The downside is that I find it far more 
common for people not to really understand what is actually happening 
underneath, but I suspect that is driven by increasing time pressures and 
expectations rather than by a shift in the way we relate to knowledge.

What I think would really help is not just to allow access to the breadth of 
knowledge, but also to enable individuals to get to the depth as well, along 
with the ability to quickly recognize lies, myths, propaganda, etc. 

Paul.

Sent from my iPad

On 2013-09-08, at 7:12 AM, Alan Kay alan.n...@yahoo.com wrote:

 Hi Paul
 
 I'm sure you are aware that yours is a very Engelbartian point of view, and 
 I think there is still much value in trying to make things better in this 
 direction.
 
 However, it's also worth noting the studies over the last 40 years (and 
 especially recently) that show how often even scientists go against their 
 training and knowledge in their decisions, and are driven more by desire and 
 environment than they realize. More knowledge is not the answer here -- but 
 it's possible that very different kinds of training could help greatly.
 
 Best wishes,
 
 Alan
 
 From: Paul Homer paul_ho...@yahoo.ca
 To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
 fonc@vpri.org; Fundamentals of New Computing fonc@vpri.org 
 Sent: Saturday, September 7, 2013 12:24 PM
 Subject: Re: [fonc] Final STEP progress report abandoned?
 
 Hi Alan,
 
 I can't predict what will come, but I definitely have a sense of where I 
 think we should go. Collectively as a species, we know a great deal, but 
 individually people still make important choices based on too little 
 knowledge. 
 
 In a very abstract sense 'intelligence' is just a more dynamic offshoot of 
 'evolution'. A sort of hyper-evolution. It allows a faster route towards 
 reacting to changes in the environment, but it is still very limited by 
 individual perspectives of the world. I don't think we need AI in the classic 
 Hollywood sense, but we could enable a sort of hyper-intelligence by giving 
 people easily digestible access to our collective understanding. Not a 'borg' 
 style single intelligence, but rather just the tools that can be used to make 
 decisions that are more accurate than an individual would have made 
 normally. 
 
 To me the path to get there lies within our understanding of data. It needs 
 to be better organized, better understood and far more accessible. It can't 
 keep getting caught up in silos, and it really needs ways to share it 
 appropriately. The world changes dramatically when we've developed the 
 ability to fuse all of our digitized information into one great structural 
 model that has the capability to separate out fact from fiction. It's a long 
 way off, but I've always thought it was possible...
 
 Paul.
 
 From: Alan Kay alan.n...@yahoo.com
 To: Fundamentals of New Computing fonc@vpri.org 
 Sent: Tuesday, September 3, 2013 7:48:22 AM
 Subject: Re: [fonc] Final STEP progress report abandoned?
 
 Hi Jonathan
 
 We are not soliciting proposals, but we like to hear the opinions of others 
 on burning issues and better directions in computing.
 
 Cheers,
 
 Alan
 
 From: Jonathan Edwards edwa...@csail.mit.edu
 To: fonc@vpri.org 
 Sent: Tuesday, September 3, 2013 4:44 AM
 Subject: Re: [fonc] Final STEP progress report abandoned?
 
 That's great news! We desperately need fresh air. As you know, the way a 
 problem is framed bounds its solutions. Do you already know what problems to 
 work on or are you soliciting proposals?
 
 Jonathan
 
 
 From: Alan Kay alan.n...@yahoo.com
 To: Fundamentals of New Computing fonc@vpri.org
 Cc: 
 Date: Mon, 2 Sep 2013 10:45:50 -0700 (PDT)
 Subject: Re: [fonc] Final STEP progress report abandoned?
 Hi Dan
 
 It actually got written and given

Re: [fonc] [tt] Final STEP progress report abandoned?

2013-09-08 Thread Paul Homer
Hi Michael,

I was really thinking of something deeper than a Delphi-style repository. Often 
I've had to negotiate between two diametrically opposed groups. I do this by 
resorting to what a mentor once taught me as 'manage by fact'. If you strip away 
the emotions, opinions, assumptions, etc. you can often find commonality at a 
basic factual level. Once you've gotten there, you just work your way back up 
to an agreement of some sort. I prefer coding, but it's a useful skill :-)

Since it works well for resolving severe disagreements, a larger computer-enabled 
fact repository might also make the world a happier place. Perhaps.

Paul.

Sent from my iPad

On 2013-09-08, at 9:32 AM, Michael Turner michael.eugene.tur...@gmail.com 
wrote:

 To me the path to get there lies within our understanding of data. It
 needs to be better organized, better understood and far more
 accessible. It can't keep getting caught up in silos, and it really
 needs ways to share it appropriately. The world changes dramatically
 when we've developed the ability to fuse all of our digitized
 information into one great structural model that has the capability to
 separate out fact from fiction. It's a long way off, but I've always
 thought it was possible...
 
 Not to be crass here, but incentives matter. And appropriate sharing
 is very much in the eye of the beholder. This is why prediction
 markets often work better than Delphi-style expert-opinion-gathering.
 Talk is cheap. To separate out fact from fiction is expensive. You
 have to make it worth people's time. And you have to make the answers
 matter to those offering them, in a way that future discounting can't
 dent much. In Delphi, you can always shrug and say, the other experts
 were just as wrong as I was. And your reputation is still secure.
 With prediction markets, there's an automatic withdrawal from your
 bank account, as well as from the accounts of all the other experts
 who were wrong.
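 
 A toy illustration of that incentive difference (made-up names and stakes, and 
 a deliberately simplified settlement rule; real markets differ):
 
    # Hypothetical binary prediction market: losers' stakes pay the winners,
    # so a wrong forecast costs you in a way a wrong Delphi opinion never does.
    positions = {                    # expert -> (prediction, stake in dollars)
        "alice": ("yes", 40.0),
        "bob":   ("no",  25.0),
        "carol": ("no",  35.0),
    }

    def settle(positions, outcome):
        losers  = sum(s for p, s in positions.values() if p != outcome)
        winners = sum(s for p, s in positions.values() if p == outcome)
        # winners split the losers' stakes in proportion to their own stake
        return {name: (stake + losers * stake / winners if pred == outcome else 0.0)
                for name, (pred, stake) in positions.items()}

    print(settle(positions, "yes"))  # {'alice': 100.0, 'bob': 0.0, 'carol': 0.0}
 
 In Delphi there is no settle() step at all, which is the point being made.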
 
 The Engelbart vision (also the Vannevar Bush vision) was incubated
 very much within a government-industry complex, where realistic
 organizational imperatives are limited by the forces that
 bureaucracies can marshal. Beyond a certain scale, that model falls
 prey to inevitable bureaucratic infighting, contention over resources,
 indefeasible claims to having the better team. If we have less unity
 in the government-industry informatics mission now, it's probably
 because the backdrop is a silly War on Terror, rather than the
 conflict that was contemporary for Engelbart and V. Bush: the somewhat
 more solidly grounded Cold War.
 
 Truth is not a fortress whose walls you can scale by piling up
 soldiers anyway. Sharper eyes and minds are not for sale, if it's only
 to march in one of Matthew Arnold's ignorant armies that clash by
 night. For some information-aggregation problems, you need the sniper
 you can't see yet, until there's a muzzle flash. At which point, if
 you're the one who's wrong, well, too late for you! C'est la guerre.
 
 Regards,
 Michael Turner
 Executive Director
 Project Persephone
 K-1 bldg 3F
 7-2-6 Nishishinjuku
 Shinjuku-ku Tokyo 160-0023
 Tel: +81 (3) 6890-1140
 Fax: +81 (3) 6890-1158
 Mobile: +81 (90) 5203-8682
 tur...@projectpersephone.org
 http://www.projectpersephone.org/
 
 Love does not consist in gazing at each other, but in looking outward
 together in the same direction. -- Antoine de Saint-Exupéry
 
 
 On Sun, Sep 8, 2013 at 8:12 PM, Alan Kay alan.n...@yahoo.com wrote:
 Hi Paul
 
 I'm sure you are aware that yours is a very Engelbartian point of view,
 and I think there is still much value in trying to make things better in
 this direction.
 
 However, it's also worth noting the studies over the last 40 years (and
 especially recently) that show how often even scientists go against their
 training and knowledge in their decisions, and are driven more by desire and
 environment than they realize. More knowledge is not the answer here -- but
 it's possible that very different kinds of training could help greatly.
 
 Best wishes,
 
 Alan
 
 
 From: Paul Homer paul_ho...@yahoo.ca
 To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing
 fonc@vpri.org; Fundamentals of New Computing fonc@vpri.org
 Sent: Saturday, September 7, 2013 12:24 PM
 Subject: Re: [fonc] Final STEP progress report abandoned?
 
 Hi Alan,
 
 I can't predict what will come, but I definitely have a sense of where I
 think we should go. Collectively as a species, we know a great deal, but
 individually people still make important choices based on too little
 knowledge.
 
 In a very abstract sense 'intelligence' is just a more dynamic offshoot of
 'evolution'. A sort of hyper-evolution. It allows a faster route towards
  reacting to changes in the environment, but it is still very limited by
 individual perspectives of the world. I don't think we need AI in the
 classic Hollywood sense, but we could enable a sort of hyper-intelligence by
 giving people

Re: [fonc] Final STEP progress report abandoned?

2013-09-07 Thread Paul Homer
Hi Alan,

I can't predict what will come, but I definitely have a sense of where I think 
we should go. Collectively as a species, we know a great deal, but individually 
people still make important choices based on too little knowledge. 


In a very abstract sense 'intelligence' is just a more dynamic offshoot of 
'evolution'. A sort of hyper-evolution. It allows a faster route towards 
reacting to changes in the environment, but it is still very limited by 
individual perspectives of the world. I don't think we need AI in the classic 
Hollywood sense, but we could enable a sort of hyper-intelligence by giving 
people easily digestible access to our collective understanding. Not a 'borg' 
style single intelligence, but rather just the tools that can be used to make 
decisions that are more accurate than an individual would have made 
normally. 


To me the path to get there lies within our understanding of data. It needs to 
be better organized, better understood and far more accessible. It can't keep 
getting caught up in silos, and it really needs ways to share it appropriately. 
The world changes dramatically when we've developed the ability to fuse all of 
our digitized information into one great structural model that has the 
capability to separate out fact from fiction. It's a long way off, but I've 
always thought it was possible...

Paul.





 From: Alan Kay alan.n...@yahoo.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Tuesday, September 3, 2013 7:48:22 AM
Subject: Re: [fonc] Final STEP progress report abandoned?
 


Hi Jonathan


We are not soliciting proposals, but we like to hear the opinions of others on 
burning issues and better directions in computing.


Cheers,


Alan




 From: Jonathan Edwards edwa...@csail.mit.edu
To: fonc@vpri.org 
Sent: Tuesday, September 3, 2013 4:44 AM
Subject: Re: [fonc] Final STEP progress report abandoned?
 


That's great news! We desperately need fresh air. As you know, the way a 
problem is framed bounds its solutions. Do you already know what problems to 
work on or are you soliciting proposals?


Jonathan



From: Alan Kay alan.n...@yahoo.com
To: Fundamentals of New Computing fonc@vpri.org
Cc: 
Date: Mon, 2 Sep 2013 10:45:50 -0700 (PDT)
Subject: Re: [fonc] Final STEP progress report abandoned?

Hi Dan


It actually got written and given to NSF and approved, etc., a while ago, but 
needs a little more work before posting on the VPRI site. 


Meanwhile we've been consumed by setting up a number of additional, and wider 
scale, research projects, and this has occupied pretty much all of my time 
for the last 5-6 months.


Cheers,


Alan




 From: Dan Melchione dm.f...@melchione.com
To: fonc@vpri.org 
Sent: Monday, September 2, 2013 10:40 AM
Subject: [fonc] Final STEP progress report abandoned?
 


Haven't seen much regarding this for a while.  Has it been abandoned or 
put at such low priority that it is effectively abandoned?
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Current topics

2013-01-01 Thread Paul Homer
My thinking has been going the other way for some time now. I see the problem 
as the need to build bigger systems than any individual can currently imagine. 
The real value from computers isn't just collecting the input from a single 
person, but rather 'combining' the inputs from huge groups of people. 
It's that ability to unify and harmonize our collective knowledge that 
gives us a leg up on being able to rationalize our rather over-complicated 
world. 

The problem I see with components, particularly a small set of large ones, is 
that as the size of a formal system increases, the possible variations explode. 
That is, even for a nearly trivial small set of primitives, there are 
several different possible decompositions. As the size of the system grows, the 
number of decompositions grows probably exponentially or better. Thus as we 
walk up the levels of abstraction to something higher, there is a much 
larger set of possibilities. If what we desire is beyond any individual's 
comprehension, and there is a huge variance in the pieces that will get 
created, then we'll run into considerable problems when we try to bring all 
of these pieces together. That I think is essentially where we are currently.

My sense of the problem is to go the other way. To make the pieces so trivial 
that they can be combined easily. It may sound labour intensive to bring it all 
together, but then we do have the ability of computers themselves to spend 
endless hours doing mundane chores for us. The trick then would be to engage as 
many people as possible in constructing these little pieces, then bring them 
all together. In a design sense, this is not substantially different from the 
Internet, or Wikipedia. These both grew organically out of relatively small 
pieces with minimal organization, yet somehow converged on an end-product that 
is considerably larger than any individual's single effort.

Paul.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2012-12-31 Thread Paul Homer
My take is that crafting software is essentially creating a formal system. No 
doubt there is some clean, normalized way to construct each one, but given that 
the work is being done by humans, a large number of less than optimal elements 
find their way into the system. Since everyone is basically distinct, their own 
form of semi-normalization is unique. Get a bunch of these together in the same 
system and there will be inevitable clashes. But given that it's often one 
variant of weirdness vs another, there is no basis for rational arguments, thus 
tempers and frustration flare. In the long run however, it's best to pick 
one weirdness and stick with it (as far as it goes). We don't yet have the 
knowledge or skills to harmonize these types of systems.

Paul.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2012-12-31 Thread Paul Homer
I don't think a more formalized language really gets around the problem. If 
that were true, we'd have already fallen back to the most consistent, yet 
simple languages available, such as assembler. But on top of these we build 
significantly more complex systems, bent by our own internal variations on 
logic. It's that layer that causes the problems. What seems like it might 
be successful is to pair our constructions with many languages that more 
closely match how people think. Now I know that sounds weird, but not if one 
accepts that a clunky, ugly language like COBOL was actually very successful. 
Lots of stuff was written, much of it still running. Its own excessive 
verbosity helps in making it fixable by a broader group of people. 

Of course there is still a huge problem with that idea. Once written, if the 
author is no longer available, the work effectively becomes frozen. It can be 
built upon, but it is hard to truly expand. Thus we get to what we have now, a 
rather massive house of cards that becomes ever more perilous to build upon. If 
we want to break the cycle, we have to choose a new option. The size of the 
work is beyond any individual's capacity, combining different people's 
work is prone to clashes, the more individualized we make the languages the 
harder they are to modify, and the more normalized we make them the harder 
they are to use.

Paul.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2012-12-31 Thread Paul Homer
Most programs are models of our irrational world. Reflections of rather 
informal systems that are inherently ambiguous and contradictory, just like our 
species. Nothing short of 'intelligence' could validate that those 
types of rules match their intended usage in the real world. If we don't 
build our internal systems models with this in mind, then they'd be too 
fragile to solve real problems for us. Like it or not, intelligence is a 
necessary ingredient, and we don't yet have any alternatives but ourselves 
to fill it.

Paul.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-04 Thread Paul Homer
That's a pretty good summary, but I'd avoid calling the 'glue' a search. If 
this were to work, it would be a deterministic algorithm that chooses the best 
possible match given a set of input and output data-types (and avoids N^2 or 
higher processing). 


Given my user-screen based construction, it would be easy to do something like 
add a hook to display the full-set of transformations used to go from the 
persistent context to the user one. Click on any data and get the full list. I 
see the contexts as more or less containing the raw data*, and the 
transformations as sitting outside of that in the kernel (although they could 
be primed from a context, like any other data). I would expect the users might 
acquire many duplicate transformations that were only partially overlapping, so 
perhaps from any display, they could fiddle with the precedences. 


* Systems often need performance boosts at the cost of some other trade-off. 
For this, I could see specialized contexts that are basically pre-calculated 
derived data or caches of other contexts. Basically any sort of memoization 
could be encapsulated into its own context, leaving the original data in a raw 
state.


Configuration would again be data, grabbed from a context somewhere, as well as 
all of the presentation and window-dressing. Given that the user starts in a 
context (basically their home screen), they would always be rooted in some way. 
Another logical extension would be for the data to be 'things' that reference 
other data in other contexts (recursively), in the same way that the web-based 
technologies work.

Three other points I think worth noting are:

- All of the data issues (like ACID) are encapsulated within the 'context'.
- All of the flow issues (like distributed and concurrency) are encapsulated 
within the kernel.
- All of the formatting and domain issues are encapsulated within the 
transformations. 


That would make it fairly easy to know where to place or find any of the 
technical or domain issues.
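
As a rough sketch of how such a deterministic 'glue' might look (the type names, 
transformation table, and shortest-chain strategy below are hypothetical, not 
any existing system):

    # A minimal sketch, assuming a hypothetical registry of small transformations.
    # The 'kernel' deterministically assembles the shortest chain from the type it
    # has to the type a screen asks for; same request, same plan, every time.
    from collections import deque

    TRANSFORMS = [                      # (source type, target type, function)
        ("bitmap",     "png",        lambda d: "png(" + d + ")"),
        ("png",        "compressed", lambda d: "zip(" + d + ")"),
        ("compressed", "png",        lambda d: "unzip(" + d + ")"),
    ]

    def plan(source_type, target_type):
        queue, seen = deque([(source_type, [])]), {source_type}
        while queue:
            current, chain = queue.popleft()
            if current == target_type:
                return chain            # first (shortest) chain found wins
            for src, dst, fn in TRANSFORMS:
                if src == current and dst not in seen:
                    seen.add(dst)
                    queue.append((dst, chain + [fn]))
        return None                     # no way to satisfy the request

    def run(data, chain):
        for fn in chain:
            data = fn(data)
        return data

    # A user screen asks for 'png'; the raw data lives in some context as 'bitmap'.
    print(run("raw-image", plan("bitmap", "png")))    # -> png(raw-image)

Cost-aware choices (for example, only inserting a compress/decompress pair when 
the source is remote) would just be a different tie-breaking rule in the same 
assembly step.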


Paul.





 From: David Barbour dmbarb...@gmail.com
To: Paul Homer paul_ho...@yahoo.ca; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Wednesday, October 3, 2012 7:10:53 PM
Subject: Re: [fonc] How it is
 

Distilling what you just said to its essence:
   * humans develop miniature dataflows
   * search algorithm automatically glues flows together
   * search goal is a data type
A potential issue is that humans - both engineers and end-users - will often 
want a fair amount of control over which translations and data sources are 
used, options for those translations, etc. You need a good way to handle 
preferences, policy, configurations. 


I tend to favor soft constraints in those roles. I'm actually designing a 
module system around the idea, and an implementation in Haskell (for RDP) 
using the plugins system and dynamic 
types. (Related: http://awelonblue.wordpress.com/2011/09/29/modularity-without-a-name/ , http://awelonblue.wordpress.com/2012/04/12/make-for-haskell-values-part-alpha/). 


Regards,


Dave

On Wed, Oct 3, 2012 at 3:33 PM, Paul Homer paul_ho...@yahoo.ca wrote:

I'm in a slightly different head-space with this idea. 



A URL for instance, is essentially an encoded set of instructions for 
navigating to somewhere and then, if it is a GET, grabbing the associated 
data, let's say an image. If my theoretical user were to create a screen (or 
perhaps we could call it a visual context), they'd just drag-and-drop an 
image-type into the position they desired. They'd have to have some way of 
tying that to 'which image', but for simplicity let's just say that they 
already created something that allows them to search, and then list all of 
the images from a known database context, so that the 'which image' is 
cascaded down from their earlier work. Once they 'made the screen live' and 
searched and selected, the underlying code would essentially get a request 
for a data flow that specified the context (location), some 'type' 
information (an image) and a context-specific instance id (as passed in from 
the search and list). The kernel would then arrange for that data to be moved 
from wherever it is (local or remote, but let's go with remote) and converted (if 
its base format was something the user's screen couldn't handle, say a custom 
bitmap). So along the way there might be a translation from one image format to 
another, and perhaps a 'compress and decompress' if the source is remote. 



That whole flow wouldn't be constructed by a programmer, just the 
translations, say bitmap→png, bits→compressed and compressed→bits. The 
kernel would work backwards, knowing that it needed an image in png format, 
and knowing that there exists base data stored in another context as a 
bitmap, and knowing that for large data it is generally cheaper to 
compress/decompress if the network is involved. The kernel would essentially 
know the absolute minimum about

Re: [fonc] How it is

2012-10-04 Thread Paul Homer
Anyhow, your vision seems young. It isn't the same as mine, but I don't 
want to discourage you.

Indeed. It's right out on the extreme edge; as data-centric as I can imagine. I 
wasn't really putting it out there with the intent to build it, but rather as 
just an example of heading away from the crowds. We often implicitly go towards 
finding higher and higher abstractions that can be used as a toolset for 
programmers building larger systems. This is the way I build commercial 
products right now, and it is also the way our languages and tools have 
evolved. But this always runs into at least two problems: a) higher 
abstractions are harder to learn, and b) abstractions can often be leaky. The 
first problem is exemplified by APL. In the hands of a master, I've seen 
amazing systems built rapidly. But it isn't the easiest language to learn, and 
some people never seem to get it. The second problem was described quite well 
by Joel Spolsky but I've never really been sure whether or not it is avoidable 
in some way. 


I don't really know if going this other way is workable, but sometimes it's 
just fun to explore the edges. These days I'm busy paying off the mortgage, 
writing, playing with math, traveling (not enough) and generally trying to keep 
my very old house (1904) from falling down, so it's unlikely that I'll get a 
chance to play around here in the near (100 years) future.


Paul.





 From: David Barbour dmbarb...@gmail.com
To: Paul Homer paul_ho...@yahoo.ca; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Thursday, October 4, 2012 4:12:34 PM
Subject: Re: [fonc] How it is
 

Don't get too handwavy about performance of the algorithm before you've 
implemented it!  The technique I'm using is definitely a search. The search is 
performed by the linker, which includes a constraint solver with 
exponential-time worst-case performance. This works out in practice because:
   * I can memoize or learn (machine learning) working solutions or sub-solutions
   * I can favor stability and incrementally update a solution in the face of change
One concern I continuously return to is the apparent conflict of stability vs. determinism. 


Suppose the available components and the context can both vary over time. 
Therefore, over time (e.g. minute to minute), the best configuration can 
change. How do you propose to handle this? Do you select the best 
configuration when the developer hits a button? Or when a user pushes a 
button? Do you reactively adapt the configuration to the resources available 
in the context? In the latter case, do you favor stability (which resources 
are selected) or do you favor quality (the best result at a given time)? How 
do you modulate between the two?
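
Purely as an illustration of that stability-vs-quality dial (a made-up hysteresis 
rule, not the actual RDP linker or its constraint solver):

    # Hypothetical sketch: keep the current configuration unless a candidate beats
    # it by more than a switching margin. margin=0 favours quality (always take the
    # best); a large margin favours stability (rarely disturb the running system).
    def reselect(current, candidates, score, margin=0.1):
        best = max(candidates, key=score)
        if current is None or score(best) > score(current) + margin:
            return best
        return current

    scores = {"local-cache": 0.60, "remote-db": 0.70, "mirror": 0.72}
    config = reselect(None, scores, scores.get)        # picks "mirror"
    scores["remote-db"] = 0.78                         # context drifts a little
    config = reselect(config, scores, scores.get)      # stays "mirror" (within margin)
    scores["remote-db"] = 0.95                         # a big enough improvement
    config = reselect(config, scores, scores.get)      # switches to "remote-db"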


I've been exploring some of these issues with respect to stateless stability 
(http://awelonblue.wordpress.com/2012/03/14/stability-without-state/) and 
potential composition of state with stateless stability. 


I agree that configuration should be represented as a resource, but often the 
configuration problem is a configurations problem, i.e. plural, more than one 
configuration. You'll need to modularize your contexts quite a bit, which will 
return you to the issue of modeling access to contexts... potentially as yet 
another dataflow. 


Anyhow, your vision seems young. It isn't the same as mine, but I don't want 
to discourage you. Start hammering it out and try a prototype implementation.


Regards,


Dave


On Thu, Oct 4, 2012 at 11:22 AM, Paul Homer paul_ho...@yahoo.ca wrote:

That's a pretty good summary, but I'd avoid calling the 'glue' a search. If 
this were to work, it would be a deterministic algorithm that chooses the best 
possible match given a set of input and output data-types (and avoids N^2 or 
higher processing). 



Given my user-screen based construction, it would be easy to do something 
like add a hook to display the full-set of transformations used to go from 
the persistent context to the user one. Click on any data and get the full 
list. I see the contexts as more or less containing the raw data*, and the 
transformations as sitting outside of that in the kernel (although they could 
be primed from a context, like any other data). I would expect the users 
might acquire many duplicate transformations that were only partially 
overlapping, so perhaps from any display, they could fiddle with the 
precedences. 



* Systems often need performance boosts at the cost of some other trade-off. 
For this, I could see specialized contexts that are basically pre-calculated 
derived data or caches of other contexts. Basically any sort of memoization 
could be encapsulated into its own context, leaving the original data in a 
raw state.



Configuration would again be data, grabbed from a context somewhere, as well 
as all of the presentation and window-dressing. Given that the user starts in 
a context (basically their home screen), they would always

Re: [fonc] How it is

2012-10-03 Thread Paul Homer
people will clean things up so long as they are sufficiently painful, but once 
this is achieved, people no longer care.

The idea I've been itching to try is to go backwards. Right now we use 
programmers to assemble larger and larger pieces of software, but as time goes 
on they get inconsistent and the inconsistencies propagate upwards. The result 
is that each new enhancement adds something, but also degrades the overall 
stability. Eventually it all hits a ceiling.

If instead, programmers just built little pieces, and it was the computer 
itself that was responsible for assembling it all together into mega-systems, 
then we could reach scales that are unimaginable today. To do this of course, 
the pieces would have to be tightly organized. Contributors wouldn't have the 
freedom they do now, but that's a necessity to move from what is essentially a 
competitive environment to a cooperative one. Some very fixed rules become 
necessary.

One question that arises from this type of idea is whether or not it is even 
possible for a computer to assemble a massive working system from say, a 
billion little code fragments. Would it run straight into NP? In the general 
case I don't think the door could be closed on this, but realistically the 
assembling doesn't have to happen in real-time, most of it can be cached and 
reused. Also, the final system would be composed of many major pieces, each 
with their own breakdown into sub-pieces and each of those being also 
decomposable. Thus getting to a super-system could take a tremendous amount of 
time, but that could be distributed over a wide and mostly independent set of 
work. As it is, we have 1M+ programmers right now, most of whom are essentially 
writing the same things (just slightly different). With that type of man-power 
directed, some pretty cool things could be created.
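
A minimal sketch of the caching point (the catalogue of pieces is invented for 
illustration): once a sub-assembly has been composed it can be memoized and 
reused, so the huge space of decompositions only ever gets explored incrementally.

    import functools

    # Hypothetical catalogue: each piece is either primitive, or a list of the
    # smaller pieces it is assembled from.
    CATALOGUE = {
        "parse": None, "index": None, "rank": None, "render": None,
        "search":  ["index", "rank"],
        "website": ["parse", "search", "render"],
    }

    @functools.lru_cache(maxsize=None)        # assembled sub-systems get reused
    def assemble(piece):
        parts = CATALOGUE[piece]
        if parts is None:
            return piece                      # a primitive fragment, used as-is
        return "(" + " + ".join(assemble(p) for p in parts) + ")"

    print(assemble("website"))   # (parse + (index + rank) + render)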

Paul.






 From: BGB cr88...@gmail.com
To: fonc@vpri.org 
Sent: Tuesday, October 2, 2012 5:48:14 PM
Subject: Re: [fonc] How it is
 

On 10/2/2012 12:19 PM, Paul Homer wrote:

It always seems to be that each new generation of programmers goes straight 
for the low-hanging fruit, ignoring that most of it has already been solved 
many times over. Meanwhile the real problems remain. There has been progress, 
but over the couple of decades I've been working, I've always felt that it was 
'2 steps forward, 1.99 steps back'. 



it depends probably on how one measures things, but I don't think it
is quite that bad.

more like, I suspect, a lot has to do with pain-threshold:
people will clean things up so long as they are sufficiently
painful, but once this is achieved, people no longer care.

the rest is people mostly recreating the past, often poorly, usually
under the idea this time we will do it right!, often without
looking into what the past technologies did or did not do well
engineering-wise.

or, they end up trying for something different, but usually this
turns out to be recreating something which already exists and turns
out to typically be a dead-end (IOW: where many have gone before,
and failed). often the people will think why has no one done it
before this way? but, usually they have, and usually it didn't turn
out well.

so, a blind rebuild starting from nothing probably wont achieve
much.
like, it requires taking account of history to improve on it
(classifying various options and design choices, ...).


it is like trying to convince other language/VM
designers/implementers that expecting the end programmer to have to
write piles of boilerplate to interface with C is a problem which
should be addressed, but people just go and use terrible APIs
usually based on registering the C callbacks with the VM (or they
devise something like JNI or JNA and congratulate themselves, rather
than being like this still kind of sucks).

though in a way it sort of makes sense:
many language designers end up thinking like this language will
replace C anyways, why bother to have a half-decent FFI?
whereas it is probably a minority position to design a language and
VM with the attitude C and C++ aren't going away anytime soon.


but, at least I am aware that most of my stuff is poor imitations of
other stuff, and doesn't really do much of anything actually
original, or necessarily even all that well, but at least I can try
to improve on things (like, rip-off and refine).

even, yes, as misguided and wasteful as it all may seem sometimes...


in a way it can be distressing though when one has created something
that is lame and ugly, but at the same time is aware of the various
design tradeoffs that has caused them to design it that way (like, a
cleaner and more elegant design could have been created, but might
have suffered in another way).

in a way, it is a slightly different experience I suspect...



Paul

Re: [fonc] How it is

2012-10-03 Thread Paul Homer
 software development, these 
would be very little pieces, and if they were shared, are intrinsically reusable 
(and recombinable).

So I'd basically go backwards :-) No higher abstractions and bigger pieces, but 
rather a sea of very little ones. It would be fun to try :-)


Paul.




 From: Loup Vaillant l...@loup-vaillant.fr
To: Paul Homer paul_ho...@yahoo.ca; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Wednesday, October 3, 2012 11:10:41 AM
Subject: Re: [fonc] How it is
 
From: Paul Homer paul_ho...@yahoo.ca

 If instead, programmers just built little pieces, and it was the
 computer itself that was responsible for assembling it all together into
 mega-systems, then we could reach scales that are unimaginable today.
 […]

Sounds neat, but I cannot visualize an instantiation of this.  Meaning,
I have no idea what assembling mechanisms could be used.  Could you
sketch a trivial example?

Loup.



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread Paul Homer
I think it's because that's what we've told them to ask for :-) 

In truth we can't actually program 'everything', I think that's a side-effect 
of Gödel's incompleteness theorem. But if you were to take 'everything' as 
being an abstract quantity, the more we write, the closer our estimation comes 
to being 'everything'. That perspective lends itself to perhaps measuring the 
current state of our industry by how much code we are writing right now. In the 
early years, we should be writing more and more. In the later years, less and 
less (as we get closer to 'everything'). My sense of the industry right now is 
that pretty much every year (factoring in the economy and the waxing or waning 
of the popularity of programming) we write more code than the year before. Thus 
we are only starting :-)

Paul.





 From: Pascal J. Bourguignon p...@informatimago.com
To: Paul Homer paul_ho...@yahoo.ca 
Cc: Fundamentals of New Computing fonc@vpri.org 
Sent: Wednesday, October 3, 2012 3:32:34 PM
Subject: Re: [fonc] How it is
 
Paul Homer paul_ho...@yahoo.ca writes:

 The on-going work to enhance the system would consist of modeling data, 
 and creating transformations. In comparison to modern software development, 
 these would be very little pieces, and if they were shared, are intrinsically 
 reusable (and recombinable).

Yes, that gives L4Gs.  Eventually (when we'll have programmed
everything) all computing will be only done with L4Gs: managers
specifying their data flows.  

But strangely enough, users are always asking for new programs…  Is it
because we've not programmed every function already, or because we will
never have them all programmed?


-- 
__Pascal Bourguignon__                    http://www.informatimago.com/
A bad day in () is better than a good day in {}.


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-02 Thread Paul Homer
It always seems to be that each new generation of programmers goes straight for 
the low-hanging fruit, ignoring that most of it has already been solved many 
times over. Meanwhile the real problems remain. There has been progress, but 
over the couple of decades I've been working, I've always felt that it was '2 
steps forward, 1.99 steps back'. 


Paul.





 From: John Pratt jpra...@gmail.com
To: fonc@vpri.org 
Sent: Tuesday, October 2, 2012 11:21:59 AM
Subject: [fonc] How it is
 
Basically, Alan Kay is too polite to say what
we all know to be the case, which is that things
are far inferior to where they could have been
if people had listened to what he was saying in the 1970's.

Inefficient chip architectures, bloated frameworks,
and people don't know at all.

It needs a reboot from the core, all of it, it's just that
people are too afraid to admit it.  New programming languages,
not aging things tied to the keyboard from the 1960's.

It took me 6 months to figure out how to write a drawing program
in cocoa, but a 16-year-old figured it out in the 1970's easily
with Smalltalk.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-18 Thread Paul Homer
This discussion has inspired me to try once again to express my sense of what I 
mean by complexity. It's probably too rambly for most people, but some may find 
it interesting:

http://theprogrammersparadox.blogspot.ca/2012/06/what-is-complexity.html

Paul.





 From: Miles Fidelman mfidel...@meetinghouse.net
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Saturday, June 16, 2012 3:20:22 PM
Subject: Re: [fonc] The Web Will Die When OOP Dies
 
BGB wrote:
 
 a problem is partly how exactly one defines complex:
 one definition is in terms of visible complexity, where basically adding a 
 feature causes code to become harder to understand, more tangled, ...
 
 another definition, apparently more popular among programmers, is to simply 
 obsess on the total amount of code in a project, and just automatically 
 assume that a 1 Mloc project is much harder to understand and maintain than 
 a 100 kloc project.

And there are functional and behavioral complexity - i.e., REAL complexity, in 
the information theory sense.

I expect that there is some correlation between minimizing visual complexity 
and lines of code (e.g., by using domain specific languages), and being able 
to deal with more complex problem spaces and/or develop more sophisticated 
approaches to problems.

Miles



-- In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Paul Homer
I see something deeper in what Zed is saying. 

My first really strong experiences with programming came from the 
data-structures world in the late 80s at the University of Waterloo. There was 
an implicit view that one could decompose all problems into data-structures 
(and a few algorithms and a little bit of glue). My sense at the time was that 
the newly emerging concepts of OO were a way of entrenching this philosophy 
directly into the programming languages.

When applied to tasks like building window systems, OO is an incredibly 
powerful approach. If one matches what they are seeing on the screen with the 
objects they are building in the back, there is a strong one-to-one mapping 
that allows the programmer to rapidly diagnose problems at a speed that just 
wasn't possible before.

But for many of the things that I've built in the back-end I find that OO 
causes me to jump through what I think are artificial hoops. Over the years 
I've spent a lot of time pondering why. My underlying sense is that there are 
some fundamental dualities in computational machines. Static vs. dynamic. Data 
vs. code. Nouns vs. verbs. Location vs. time. It is possible, of course, to 
'cast' one onto the other, there are plenty of examples of 'jumping' 
particularly in languages wrt. nouns and verbs. But I think that decompositions 
become 'easier' for us to understand when we partition them along the 'natural' 
lines of what they are underneath.

My thinking some time ago as it applies to OO is that the fundamental 
primitive, an object, essentially mixes its metaphors (sort of). That is, it 
contains both code and data. I think it's this relatively simple point that 
underlies the problems that people have in grokking OO. What I've also found is 
that that wasn't there in that earlier philosophy at Waterloo. Sure there were 
atomic primitives attached to each data-structure, but the way we build 
heavy-duty mechanics was more often to push the 'actions' to something like an 
intermediary data-structure and then do a clean simple traversal to actuate it 
(like lisp), so fundamentally the static/dynamic duality was daintily skipped 
over.

It is far more than obvious that OO opened the door to allow massive systems. 
Theoretically they were possible before, but it gave us a way to manage the 
complexity of these beasts. Still, like all technologies, it comes with a 
built-in 'threshold' that imposes a limit on what we can build. If we are to 
exceed that, then I think we are in the hunt for the next philosophy and as Zed 
points out the ramification of finding it will cause yet another technological 
wave to overtake the last one.

Just my thoughts.


Paul.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Paul Homer
 fundamental. 


We need to reduce complexity at all levels and that includes the culture we 
swim in.


cheers,
-David Leibs


On Jun 15, 2012, at 10:58 AM, BGB wrote:

On 6/15/2012 12:27 PM, Paul Homer wrote: 
I wouldn't describe complexity as a problem, but rather an attribute of the 
universe we exist in, affecting everything from how we organize our societies 
to how the various solar systems interact with each other.


Each time you conquer the current complexity, your approach adds to it. 
Eventually all that conquering needs to be conquered itself ...

yep.

the world of software is layers upon layers of stuff.
one thing is made, and made easier, at the cost of adding a fair
amount of complexity somewhere else.

this is generally considered a good tradeoff, because the reduction
of complexity in things that are seen is perceptually more important
than the increase in internal complexity in the things not seen.

although it may be possible to reduce complexity, say by finding
ways to do the same things with less total complexity, this will not
actually change the underlying issue (or in other cases may come
with costs worse than internal complexity, such as poor performance
or drastically higher memory use, ...).



 
Paul.





 From: Loup Vaillant l...@loup-vaillant.fr
To: fonc@vpri.org 
Sent: Friday, June 15, 2012 1:54:04 PM
Subject: Re: [fonc] The Web Will Die When OOP Dies
 
Paul Homer wrote:
 It is far more than obvious that OO opened the door to allow massive
 systems. Theoretically they were possible before, but it gave us a way
 to manage the complexity of these beasts. Still, like all technologies,
 it comes with a built-in 'threshold' that imposes a limit on what we can
 build. If we are to exceed that, then I think we are in the hunt for
 the next philosophy and as Zed points out the ramification of finding it
 will cause yet another technological wave to overtake the last one.

I find that a bit depressing: if each tool that tackles complexity
better than the previous ones leads us to increase complexity (just
because we can), we're kinda doomed.

Can't we recognize complexity as a problem, instead of an unavoidable
law of nature?  Thank goodness we have the STEPS project to shed some light.

Loup.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] IBM eyes brain-like computing

2011-10-25 Thread Paul Homer
I've always suspected that it comes from the ability to see around corners, 
which appears to be a rare ability. If someone keeps seeing things that other 
people say aren't there, eventually it will drive them a little crazy :-)

An amazing example of this (I think) is contained in this video:

http://www.randsinrepose.com/archives/2011/10/06/you_are_underestimating_the_future.html



Paul.





From: John Zabroski johnzabro...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org
Sent: Tuesday, October 25, 2011 11:55:29 AM
Subject: Re: [fonc] IBM eyes brain-like computing


Brian,

I recommend you pick up a copy of Ray Kurzweil's The Singularity Is Near.  Ray 
is smarter than basically everyone, and although a tad bit crazy (teaching at 
MIT will do that to you :)), he is a legitimate genius.

Basically, before arguing about the limits of computing, read Ray Kurzweil.  
Others have written similar stuff here and there, but nobody is as passionate 
and willing to argue about the subject as Ray.

Cheers,
Z-Bo


On Fri, Oct 14, 2011 at 2:44 PM, BGB cr88...@gmail.com wrote:

On 10/14/2011 9:29 AM, karl ramberg wrote:

Interesting article :
http://www.itnews.com.au/News/276700,ibm-eyes-brain-like-computing.aspx

Not many details, but what they envision seems to be more in the
character of an autonomic system that can be queried for answers, not
programmed like today's computers.


I have seen stuff about this several times, with some articles actively 
demeaning and belittling / trivializing the existing pre-programmed von Neumann 
/ stored-program style machines.


but, one can ask, why then are there these machines in the first place:
largely it is because the human mind also falls on its face for tasks which 
computers can perform easily, such as performing large amounts of 
calculations (and being readily updated).

also, IBM is exploring some lines of chips (neural-net processors, ...) which 
may well be able to do a few interesting things, but I predict, will fall far 
short of their present claims.


it is likely that the road forwards will not be a one or the other 
scenario, but will likely result in hybrid systems combining the strengths of 
both.

for example, powerful neural-nets would be a nice addition, but I would not 
want to see them at the cost of programmability, ability to copy or install 
software, make backups, ...

better IMO is if the neural nets could essentially exist in-computer as giant 
data-cubes under program control, which can be paused/resumed, or loaded from 
or stored to the HDD, ...

also, programs using neural-nets would still remain as software in the 
traditional sense, and maybe neural-nets would be stored/copied/... as 
ordinary files.

(for example, if a human-like mind could be represented as several TB worth 
of data-files...).


granted, also debatable is how to best represent/process the neural-nets.
IBM is exploring the use of hard-wired logic and crossbar arrays / 
memristors / ...
also implied was that all of the neural state was stored in the chip itself 
in a non-volatile manner, and also (by implication from things read) not 
readily subject to being read/written externally.


my own thoughts had been more along the lines of fine-grained GPUs, where the 
architecture would be vaguely similar to a GPU but probably with lots more 
cores and each likely only being a simple integer unit (or fixed-point), 
probably with some local cache memory.
likely, these units would be specialized some for the task, with common 
calculations/... likely being handled in hardware.

the more cheaper/immediate route would be, of course, to just do it on the 
GPU (lots of GPU power and OpenCL or similar). or maybe creating an 
OpenGL-like library dedicated mostly to running neural nets on the GPU (with 
both built-in neuron types, and maybe also neuronal shaders, sort of like 
fragment shaders or similar). maybe called OpenNNL or something...

although potentially not as powerful (in terms of neurons/watt), I think my 
idea would have an advantage that it would allow more variety in neuron 
behavior, which could likely be necessary for making this sort of thing 
actually work in a practical sense.


however, I think the idea of memristors is also cool, but I would presume 
that their use would more likely be as a type of RAM / NVRAM / SSD-like 
technology, and not in conflict with the existing technology and architecture.


or such...




___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Simple, Simplistic, and Scale

2011-07-29 Thread Paul Homer
There is nothing simple about simplification :-)

In '07 I penned a few thoughts about it too:

http://theprogrammersparadox.blogspot.com/2007/12/nature-of-simple.html


Paul.





From: David Barbour dmbarb...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org
Sent: Thursday, July 28, 2011 11:19:47 PM
Subject: [fonc] Simple, Simplistic, and Scale


On Thu, Jul 28, 2011 at 2:16 PM, BGB cr88...@gmail.com wrote:

striving for simplicity can also help, but even simplicity can have costs:
sometimes, simplicity in one place may lead to much higher complexity 
somewhere else. [...]

it is better to try to find a simple way to handle issues, rather than try to 
sweep them under the carpet or try to push them somewhere else.
 
I like to call this the difference between 'simple' and 'simplistic'. It is 
unfortunate that it is so easy to strive for the former and achieve the 
latter. 


* Simple is better than complex.
* Complex is better than complicated.
* Complicated is better than simplistic.


The key is that 'simple' must still capture the essential difficulty and 
complexity of a problem. There really is a limit for 'as simple as possible', 
and if you breach it you get 'simplistic', which shifts uncaptured complexity 
onto each client of the model. 


We can conclude some interesting properties: First, you cannot achieve 
'simplicity' without knowing your problem or requirements very precisely. 
Second, the difference between simple and simplistic only becomes visible for 
a model, framework, or API when it is shared and scaled to multiple clients 
and use cases (this allows you to see repetition of the uncaptured 
complexity). 


I first made these observations in early 2004, and developed a methodological 
approach to achieving simplicity:
(1) Take a set of requirements.
(2) Generate a model that barely covers them, as precisely as possible. (This 
will be simplistic.)
(3) Explore the model with multiple use-cases, especially at large scales. 
(Developer stories. Pseudocode.)
(4) Identify repetitions, boiler-plate, any stumbling blocks.
(5) Distill a new set of requirements. (Not monotonic.)
(6) Rinse, wash, repeat until I fail to make discernible progress for a long 
while.
(7) At the end, generate a model that barely overshoots the requirements.


This methodology works on the simple principle: it's easier to recognize 
'simplistic' than 'not quite as simple as possible'. All you need to do is 
scale the problem (in as many dimensions as possible) and simplistic hops 
right out of the picture and slaps you in the face. 


By comparison, unnecessary complexity or power will lurk, invisible to our 
preconceptions and perspectives - sometimes as a glass ceiling, sometimes as 
an eroding force, sometimes as brittleness - but always causing scalability 
issues that don't seem obvious. Most people who live in the model won't even 
see there is a problem, just 'the way things are', just Blub. The only way to 
recognize unnecessary power or complexity is to find a simpler way. 


So, when initially developing a model, it's better to start simplistic and 
work towards simple. When you're done, at the very edges, add just enough 
power, with constrained access, to cover the cases you did not foresee (e.g. 
Turing-complete only at the toplevel, or only with a special object 
capability). After all, complicated but sufficient is better than simplistic 
or insufficient.


I've been repeating this for 7 years now. My model was seeded in 2003 October 
with the question: What would it take to build the cyber-world envisioned in 
Neal Stephenson's Snow Crash? At that time, I had no interest in language 
design, but that quickly changed after distilling some requirements. I took a 
bunch of post-grad courses related to language design, compilers, distributed 
systems, and survivable networking. Of course, my original refinement model 
didn't really account for inspiration on the way. I've since become interested 
in command-and-control and data-fusion, which now has a major influence on my 
model. A requirement discovered in 2010 March led to my current programming 
model, Reactive Demand Programming, which has been further refined: temporal 
semantics were added initially to support precise multimedia synchronization 
in a distributed system, my temporal semantics have been refined to support 
anticipation (which is useful for
 ad-hoc coordination, smooth animation, event detection), and my state model 
was refined twice to support anticipation (via the temporal semantics) and live 
programming. 


I had to develop a methodological approach to simplicity, because the problem 
I so gleefully attacked is much, much bigger than I am. (Still is. Besides 
developing RDP, I've also studied interactive fiction, modular simulations, 
the accessibility issues for blind access to the virtual world, the 
possibility of CSS-like transforms on 3D structures, and so on. I have a 
potentially