(Top posting because Yahoo won't quote HTML email)

Steve,
Some of Google's tech talks on AI are here:
http://www.google.com/search?hl=en&q=google+video+techtalks+ai&btnG=Search

Google has an interest in AI because search is an AI problem, especially if you 
are searching for images or video. Also, their advertising model could use some 
help. I often go to data compression sites where Google is advertising 
compression socks, compression springs, air compressors, etc. I'm sure you've 
seen the problem.

>>Software is not subject to Moore's Law so its cost will eventually dominate.
>Here I could write a book and more. It could and should obey Moore's law, but
>history and common practice have gone in other directions.

Since you have experience writing sophisticated software on very limited 
hardware, perhaps you can enlighten us on how to exponentially reduce the cost 
of software instead of just talking about it. Maybe you can write AGI, or the 
next version of Windows, in one day. You might encounter a few obstacles, e.g.

1. Software testing is not computable (the halting problem reduces to it).

2. The cost of software is O(n log n). This is because you need O(log n) levels 
of abstraction to keep the interconnectivity of the software below the 
threshold of stability to chaos, above which it is not maintainable (where each 
software change introduces more bugs than it fixes). Abstraction levels are 
things like symbolic names, functions, classes, namespaces, libraries, and 
client-server protocols.

3. Increasing the computational power of a computer by n only increases its 
usefulness by log n. Useful algorithms tend to have a power law distribution 
over computational requirements.
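The scaling claims in points 2 and 3 can be made concrete with a toy model. This is only a sketch: the fan-in bound of 50 and the base-2 usefulness measure are illustrative assumptions, not figures from the argument above.

```python
import math

FAN_IN = 50  # assumed bound on directly-interacting parts per abstraction level

def abstraction_levels(n):
    """Levels needed so no component interacts directly with more than
    FAN_IN others -- the O(log n) factor in point 2."""
    return max(1, math.ceil(math.log(n, FAN_IN)))

def software_cost(n):
    """Toy O(n log n) cost model: each of n components is written against
    every abstraction level above it."""
    return n * abstraction_levels(n)

def usefulness(power):
    """Point 3: usefulness grows only as the log of computational power."""
    return math.log2(power)
```

Under this model, doubling the hardware budget adds only a constant increment of usefulness, while software cost grows faster than linearly in system size.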

>>A human brain has about 10^9 bits of knowledge, of which probably 10^7 to
>>10^8 bits are unique to each individual. That makes 10^17 to 10^18 bits
>>that have to be extracted from human brains and communicated to the AGI.
>>This could be done in code or formal language, although most of it will
>>probably be done in natural language once this capability is developed.

>It would be MUCH easier and cheaper to just scan it out with something like a
>scanning UV fluorescent microscope.

No it would not. Assuming we had the technology to copy brains (which we don't 
and you don't), then you have created a machine with human motives. You would 
still have to pay it to work. Do you really think you understand the brain well 
enough to reprogram it to want to work?

>Further, I see the interest in AGIs on this forum as a sort of religious
>quest, that is absurd to even consider outside of Western religions.

No, it is about the money. The AGIs that actually get built will be the ones 
that can make money for their owners. If an AGI can do anything that a human 
can do, then that would include work. Currently that's worth $66 trillion per 
year.

-- Matt Mahoney, [EMAIL PROTECTED]

--- On Tue, 9/9/08, Steve Richfield <[EMAIL PROTECTED]> wrote:
From: Steve Richfield <[EMAIL PROTECTED]>
Subject: Re: [agi] Re: AI isn't cheap
To: [email protected]
Date: Tuesday, September 9, 2008, 2:10 PM

Matt,


On 9/9/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
--- On Mon, 9/8/08, Steve Richfield <[EMAIL PROTECTED]> wrote:

On 9/7/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:

>>The fact is that thousands of very intelligent people have been trying
>>to solve AI for the last 50 years, and most of them shared your optimism.


>Unfortunately, their positions as students and professors at various
>universities have forced almost all of them into politically correct
>paths, substantially all of which lead nowhere, for otherwise they would

>have succeeded long ago. The few mavericks who aren't stuck in a
>university (like those on this forum) all lack funding.

Google is actively pursuing AI and has money to spend.
 
Maybe I am a couple of years out of date here, but the last time I looked, they 
were narrowly interested in search capabilities and not at all interested in 
linking up fragments from around the Internet, filling in missing metadata, 
problem solving, and the other sorts of things that are in my own area of 
interest. I attempted to interest them in my approaches, but got blown off 
apparently because they thought that my efforts were in a different direction 
than their interests. Have I missed something?


 
If you have seen some of their talks,
 
I haven't. Are any of them available somewhere?


 
you know they are pursuing some basic and novel research.
 
Outside of searching?

 
>>Perhaps it would be more fruitful to estimate the cost of automating the
>>global economy. I explained my estimate of 10^25 bits of memory, 10^26

>>OPS, 10^17 bits of software and 10^15 dollars.

You want to replicate the work currently done by 10^10 human brains. A brain 
has 10^15 synapses. A neuron axon has an information rate of 10 bits per 
second. As I said, you can argue about these numbers but it doesn't matter 
much. An order of magnitude error only changes the time to AGI by a few years 
at the current rate of Moore's Law.
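To see why an order-of-magnitude error barely matters: under Moore's Law each doubling period buys a factor of 2 in hardware, so a factor-of-k underestimate costs only log2(k) doubling periods. A quick sketch, where the 1.5-year doubling period is an assumed figure rather than one stated above:

```python
import math

DOUBLING_YEARS = 1.5  # assumed Moore's-law doubling period

def delay_years(error_factor):
    """Extra years of hardware growth needed to absorb a factor-of-k
    underestimate of the required capacity."""
    return math.log2(error_factor) * DOUBLING_YEARS

# A 10x error in the synapse or bit-rate estimates shifts the timeline
# by about 5 years; a 100x error, by about 10.
```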


Software is not subject to Moore's Law so its cost will eventually dominate.
 
Here I could write a book and more. It could and should obey Moore's law, but 
history and common practice have gone in other directions. Starting with the 
Bell Labs Interpretive System on the IBM-650 and probably peaking at Remote 
Time Sharing in 1970, methods of bootstrapping a succession of higher 
capabilities to grow exponentially have been known. Imagine a time-sharing 
system with FORTRAN/ALGOL/BASIC all rolled into one memory-resident compiler, 
significance arithmetic, etc., servicing many of the high schools in Seattle 
(including Lakeside, where Bill Gates and Paul Allen learned on it), all on 
the equivalent of a Commodore 64. Some of the customers complained about 
having only 8KB of Huffman-coded macro-instructions to hold their programs, 
until a chess-playing program that ran in that 8K and never lost a game 
appeared in the library. Then came the microprocessors, and all of this has 
been forgotten. Microsoft sought to "do less with less" without ever realizing 
that the really BIG machine they learned on (and which they have still yet to 
equal) was only the equivalent of a Commodore 64. I wrote that compiler and 
chess game.

 
No, the primary limitation is cultural. I have discussed here how to make 
processors that run 10,000 times faster, and how to build a scanning UV 
fluorescent microscope that diagrams brains. The SAME thing blocks both: 
culture. Intel is up against EXACTLY the same mind block that IBM was up 
against when for decades they couldn't move beyond Project Stretch. And there 
simply isn't any area of study into which a scanning UV fluorescence 
microscope now cleanly falls, of course because without the microscope, such 
an area of study could not develop. Things are now quite stuck until either 
the culture changes (don't hold your breath) or the present generations of 
"experts" (including us) die off.

 
At present, I don't expect to see any AGIs in our lifetime, though I do believe 
that with support one could be developed in 10-20 years. This cannot happen 
until someone gives the relevant sciences a new name, stops respecting present 
corporate and university structures (e.g., the notion that PhDs have anything 
but negative value), and injects ~$10^9 to start it. Of course, this requires 
independent rather than corporate or university money: some rich guy who sees 
the light. Until I meet this guy, I'm sticking to tractable projects like 
Dr. Eliza.


 
A human brain has about 10^9 bits of knowledge, of which probably 10^7 to 10^8 
bits are unique to each individual. That makes 10^17 to 10^18 bits that have to 
be extracted from human brains and communicated to the AGI. This could be done 
in code or formal language, although most of it will probably be done in 
natural language once this capability is developed.
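The arithmetic behind the 10^17 to 10^18 figure is just the per-person unique knowledge multiplied by the population of 10^10 brains used earlier in this thread:

```python
PEOPLE = 10**10                          # human brains, as estimated above
UNIQUE_LOW, UNIQUE_HIGH = 10**7, 10**8   # unique bits per individual

total_low = PEOPLE * UNIQUE_LOW    # 10^17 bits to extract
total_high = PEOPLE * UNIQUE_HIGH  # 10^18 bits to extract
```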

 
It would be MUCH easier and cheaper to just scan it out with something like a 
scanning UV fluorescent microscope.

 
Since we don't know which parts of our knowledge are shared, the most practical 
approach is to dump all of it and let the AGI remove the redundancies.

 
This requires an AGI to make an AGI - a problem for the first one.

 
This will require a substantial fraction of each person's lifetime, so it has 
to be done in unobtrusive ways, such as recording all of your email and 
conversations (which, of course, all the major free services already do).

 
Note as I have been saying, the MOST important metadata does NOT appear 
anywhere in print.

 
The cost estimate of $10^15 comes by estimating the world GDP ($66 trillion per 
year in 2006, increasing 5% annually) from now until we have the hardware to 
support AGI. We have the option to have AGI sooner by paying more. Simple 
economics suggests we will pay up to what it is worth.
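As a rough check of the $10^15 figure: cumulative world GDP, starting at $66 trillion and growing 5% per year, crosses $10^15 in about a dozen years. This is a sketch; the roughly 12-year horizon is computed here, not stated above.

```python
def cumulative_gdp(years, gdp0=66e12, growth=0.05):
    """World GDP summed over `years` successive years, starting from
    $66 trillion (2006) and growing 5% annually."""
    return sum(gdp0 * (1 + growth) ** t for t in range(years))

# cumulative_gdp(11) is just under $10^15; cumulative_gdp(12) exceeds it.
```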

 
But, as I have challenged on this forum many times, why is it worth much of 
anything at all? Don't we already have enough GIs running around to not need 
any AGIs? I STILL don't see the general value, though I do concede that if 
there were AGIs, we would find some interesting jobs for them to do. My point 
is that I just don't see our world as being much better with them than without 
them, certainly not enough better to justify the expense. Is the whole idea 
for us to live in virtual worlds playing virtual games while the AGIs have all 
the fun dealing with the more interesting real world?! Please read The Eden 
Cycle by Gallun, where this is explored in depth. Even simple Dr. Eliza has 
greater real-world economic prospects, and replicating specific people's 
consciousness shows limitless potential worth. Why even bother with AGIs?

 
Further, I see the interest in AGIs on this forum as a sort of religious 
quest, one that is absurd to even consider outside of Western religions. 
People here can't seem to see their own shitforbrains programming, yet they 
want to capture it in machines. The real world is painful - and wonderful. If 
you can't enjoy it as it now is, then you probably won't enjoy it any more 
with AGIs as you mess it up for everyone else (Jiri's opinions 
notwithstanding). We NEED our problems. I see AGIs more as potential pollution 
than salvation. You should view the Bill Moyers series of interviews with 
Joseph Campbell if what I am saying here isn't completely obvious to you.

Steve Richfield
 




  
    
      

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com
