[agi] Doubling-time watcher - March 2003.

2003-03-24 Thread Deering
I didn't intend this to become a monthly advertisement for Dell, but if
someone comes up with more bang-for-the-buck (BFTB) from someone else I
would be very interested.

The February 2003 best-BFTB system ran $399; this month you have to spend a
little more to get the best deal.


$499 including Free shipping.
Dell Dimension 2350 Series:  Intel Celeron Processor at 1.80GHz
 Memory:   256MB DDR SDRAM
 Keyboard:  Dell Quietkey Keyboard
 Monitor:  New 17 in (16.0 in v.i.s., .27dp) E772 Monitor
 Video Card:  Integrated Intel Extreme 3D Graphics
 Hard Drive:  30GB Value Hard Drive
 Floppy Drive and Additional Storage Devices:  3.5 in Floppy Drive
 Operating System:  Microsoft Windows XP Home Edition
 Mouse:  Dell 2-button scroll mouse
 Network Interface:  Integrated 10/100 Ethernet
 Modem:  56K PCI Data/Fax Modem
 CD or DVD Drive:  48x Max CD-ROM Drive
 Sound Card:  Integrated Audio
 Speakers:  New Harman Kardon HK-206 Speakers
 Bundled Software:  WordPerfect Productivity Pack with Quicken New User
Edition
 Digital Music:  Dell Jukebox powered by MUSICMATCH
 Digital Photography:  Dell Picture Studio Image Expert Standard
 Limited Warranty, Services and Support Options:  1Yr Ltd Warr plus 1Yr
At-Home Service + 90Days Dell SecurityCenter (McAfee)
 Internet Access Services:  6 Months of EarthLink Internet Access
FREE! Lexmark X75 Inkjet Printer

After we have a few more data points we can discuss how best to graph the
power/price function as it applies specifically to the AGI application.
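
As a starting point, here is a minimal sketch (Python) of how a doubling time could be
fit to these postings once a few months of data have accumulated. The metric below,
CPU MHz per dollar, is only a crude stand-in for bang-for-the-buck, and the sample
figures are hypothetical; the point is just the log-linear fit.

import math

def doubling_time_years(samples):
    """samples: list of (years_since_first_posting, performance_per_dollar).
    Fits log(performance/$) = a + b*t by least squares and returns ln(2)/b,
    the implied doubling time in years."""
    xs = [t for t, _ in samples]
    ys = [math.log(p) for _, p in samples]
    n = len(samples)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.log(2) / slope

# Hypothetical MHz-per-dollar figures for three monthly postings:
print(doubling_time_years([(0.00, 4.3), (0.08, 4.5), (0.17, 4.8)]))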

Mike Deering, Director
www.SingularityActionGroup.com



[agi] doubling time watcher.

2003-02-18 Thread Mike Deering



Unless Ben thinks it would not be appropriate for this list, I would like to start a 
monthly "doubling time" watcher posting of retail computer prices, for purposes of 
establishing a historical record so that questions of doubling time can be grounded 
in current data.

My choice of category is "most bang for the buck" complete system from a major 
retailer or manufacturer. Usually this will be their lowest-priced system, as 
upgrades generally cost more than the differential computational value they add. 
Anyone who would like to post a different category is welcome; you can never have 
too much data.

My selection for the "most bang for the buck" category for 2/18/03 is:

Dell Dimension 2350 Series
Processor: Celeron 1.7 GHz
Memory: 128 MB
Hard Drive: 60 GB
Monitor: 15 inch
CD: 48x
Floppy drive: Y
Keyboard: Y
Mouse: Y
Graphics Card: Extreme 3D Graphics
OS: Windows XP (Home)
Speakers: Y
Sound card: Y
Ethernet: Y
Modem: Y
Software: WordPerfect, Quicken

Price: $399

I might get one of these for my wife so she will stay off mine. We are a poor 
one-computer family.

Mike Deering.
www.SingularityActionGroup.com 
---new website.




RE: [agi] doubling time watcher.

2003-02-18 Thread Ben Goertzel




It's not totally on-focus for the list, but a monthly post on the topic certainly 
won't hurt. It will be interesting to see just how cheap computers become over the 
next couple of years! That $399 computer has a faster processor than any of my 8 
machines, I believe!

-- 
Ben




Re: [agi] doubling time watcher.

2003-02-18 Thread Stephen Reed
I would like to contribute new SPEC CINT 2000 results as they are posted
to the SPEC benchmark list by semiconductor manufacturers.  I expect
to post perhaps 10 times per year with this news.  This is the source data
for my Human Equivalent Computing spreadsheet and regression line.

If Kurzweil and Mike Deering are right, then the new processor benchmarks
should mostly appear above the existing regression line.  [I hope there is
time to make Cyc -or some other AGI software - safely smart before the
danger of spontaneous emergence arrives.]
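
Just to make the target concrete, here is a rough sketch (Python, made-up numbers) of
the kind of extrapolation such a regression line supports: assume a fitted exponential
trend in benchmark-equivalent operations per second and solve for the year it crosses
a Moravec-style human-equivalent threshold of roughly 10^14 operations per second. The
baseline, doubling period, and threshold below are illustrative placeholders, not
figures from the actual spreadsheet.

import math

BASELINE_OPS = 1e9       # assumed benchmark-equivalent ops/sec in 2003 (placeholder)
DOUBLING_YEARS = 1.5     # assumed doubling period of the fitted trend (placeholder)
HUMAN_EQUIV_OPS = 1e14   # Moravec-style human-equivalent estimate, ~100 million MIPS

def year_threshold_reached(baseline=BASELINE_OPS, doubling=DOUBLING_YEARS,
                           target=HUMAN_EQUIV_OPS, start_year=2003):
    # Solve baseline * 2**((year - start_year) / doubling) = target for year.
    return start_year + doubling * math.log2(target / baseline)

print(year_threshold_reached())   # about 2028 with these placeholder inputs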

-Steve


-- 
===
Stephen L. Reed  phone:  512.342.4036
Cycorp, Suite 100  fax:  512.342.4040
3721 Executive Center Drive  email:  [EMAIL PROTECTED]
Austin, TX 78731   web:  http://www.cyc.com
 download OpenCyc at http://www.opencyc.org
===




Re: [agi] doubling time watcher.

2003-02-18 Thread Brad Wyble
 
 I would like to contribute new SPEC CINT 2000 results as they are posted
 to the SPEC benchmark list by semiconductor manufacturers.  I expect
 to post perhaps 10 times per year with this news.  This is the source data
 for my Human Equivalent Computing spreadsheet and regression line.

I'm uncomfortable with the phrase "Human Equivalent" because I think we are very far 
from understanding what that phrase even means.  We don't yet know the relevant 
computational units of brain function.  It's not just spikes, it's not just EEG 
rhythms.  I understand we'll never know for certain, but at the moment, even 
guesstimating within an order of magnitude seems premature.

This isn't to say that the regression isn't a bad idea, or irrelevant to AGI design.  
I just don't like the title.  

-Brad




Re: [agi] doubling time watcher.

2003-02-18 Thread Brad Wyble
 
 Brad writes, Might it not be a more accurate measure to chart mobo+CPU combo prices?
 
 Maybe.   If you wanted to research and post this data I'm sure it would be helpful to have.


Check out www.pricewatch.com.   They have a search engine that ranks vendor offers by 
price for a given product.  Using this, you could get lots and lots of data from one 
source.  By averaging the prices from the top 10 cheapest vendors, you'd wash out weird 
one-time price-break deals that would pollute your data if you only considered the cheapest.
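
A quick sketch of that averaging idea (Python; the quotes below are invented, and
pricewatch has no official API, so assume the listings are copied out by hand):

def smoothed_price(vendor_prices, k=10):
    """Average the k cheapest quotes to damp one-off loss-leader deals."""
    cheapest = sorted(vendor_prices)[:k]
    return sum(cheapest) / len(cheapest)

# Hypothetical vendor quotes (dollars) for one mobo+CPU combo:
quotes = [149, 152, 155, 155, 158, 159, 161, 164, 166, 170, 199, 89]
print(smoothed_price(quotes))   # the lone $89 deal shifts the average only a few
                                # percent, versus 40% if you tracked just the cheapest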

They also have data for complete systems.  


It's also probable that pricewatch keeps archived price data.  You might consider 
emailing them.  Find a smart techie in their NOC who thinks AI is cool and you might 
get your hands on 5+ years of perfect data on every index of computing power: 
CPU, hard drives, tape storage, RAM, everything.

-Brad




RE: [agi] doubling time watcher.

2003-02-18 Thread James Rogers
On Tue, 2003-02-18 at 10:48, Ben Goertzel wrote:
  
 A completely unknown genius at the University of Outer Kirgizia could
 band together with his grad students and create an AGI in 5 years,
 then release it on the shocked world.


Ack!  I thought this was a secret!

Curses, foiled again...


-James Rogers
 [EMAIL PROTECTED]




Re: [agi] doubling time watcher.

2003-02-18 Thread Brad Wyble
 
 I used the assumptions of Hans Moravec to arrive at Human Equivalent
 Computer processing power:
 
 http://www.frc.ri.cmu.edu/~hpm/
 
 Of course as we get closer to AGI then the error delta becomes smaller.  I
 am comfortable with the name for now and will adjust the metric as more
 info becomes available.
 

The error delta depends more on neuroscience research than on AGI progress.  I'm not 
comfortable with Moravec's calculations, but his approach of estimating based on 
retinal processing power is better than anything else I've read on the subject.  
Retinal neurons aren't quite the same beasts as the enormous pyramidal cells that 
make up much of the brain, though.
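
For reference, the back-of-the-envelope arithmetic behind the retina-based estimate
runs roughly like this (approximate, commonly cited figures, paraphrased rather than
quoted from Moravec's essay):

regions = 1e6            # resolvable image regions in the retina
frames_per_sec = 10      # useful update rate
ops_per_detection = 100  # machine instructions per edge/motion detection
retina_mips = regions * frames_per_sec * ops_per_detection / 1e6   # ~1000 MIPS

brain_to_retina_ratio = 75000   # rough ratio of brain volume to retina volume
brain_mips = retina_mips * brain_to_retina_ratio
print("%.0e MIPS" % brain_mips)  # ~1e8 MIPS, i.e. on the order of 1e14 instructions/sec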

 
  This isn't to say that the regression isn't a bad idea, or irrelevant to AGI 
design.  I just don't like the title.
 
  -Brad


Oops, I meant to say: This isn't to say that the regression *is* a bad idea.




Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Mike Deering



Billy, I agree that AGI is a complicated architecture of hundreds of separate 
software solutions. But all of these solutions have utility in other software 
environments, and progress is being made by tens of thousands of programmers, each 
working on improving some little software function for some other purpose, with no 
idea that it will someday be used in AGI. There is nothing truly unique about the 
functional building blocks of AGI, just the overall architecture.


Having gone way out on a limb here, all you AGI experts 
can now start sawing.


Mike Deering.
www.SingularityActionGroup.com 
---new website.


RE: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Ben Goertzel




Yeah, I don't think your statement is true...

And I'm a huge advocate of the "integrative" approach.  My feeling is that maybe 
half of the ingredients of an AGI are things that were created for other (usually 
narrow AI) purposes and can be used, not "off the shelf", but with only moderate 
rather than severe modifications.  The other half are things that are certainly 
*related* to known science, but are unique to AGI, without a significant use as 
"standalone" systems.

For example, overall regulation of system attention (focus) is a big part of any AGI 
system, and I don't think any non-AGI algorithms are ever going to be helpful for 
solving the problem in an AGI context.  (Novamente has its own way of doing this, 
which does not draw on any narrow-AI or general-CS methods except loosely.)

I note that at least one famous AI guy -- Danny Hillis -- agrees with you though. 
That's part of the reason he gave up on AI work.  He figures AGI is "just a lot of 
little things" and he's working on some of the little things now, not on the big 
picture.  Of course, this philosophy is probably why his company Thinking Machines 
didn't have a coordinated AI research program; instead it worked on a lot of 
different things using a common hardware architecture...

-- Ben G



Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Alan Grimes
 From recent comments here I can see there are still a lot of people out
 there who think that building an AGI is a relatively modest-size 
 project, and the key to success is simply uncovering some new insight 
 or technique that has been overlooked thus far.

I would agree with that, though the key is not so much insight (as the
term is commonly used) but rather a willingness to accept the ugly truths
of human intelligence...

 IMHO this is partly a matter of necessary optimism (i.e. we can only 
 afford a 4-man-year project, so let's hope that will be enough),

There was a fair amount of that, especially when hardware was even
tinier than it is today. =P 

 and partly a sort of bleedover from the view of human minds that 
 dominated the social sciences for most of the 20th century (i.e. 
 infants are a blank slate, and blank slates sound pretty simple, so a 
 newly-written AGI must be a relatively simple program).

In some ways that is the best perspective (in contrast with Cyc, which
attempts to engrave everything first...) 

But you are right that a topologically flat blank slate won't work
either...

 complex adaptive behavior requires a complex, specialized 
 implementation. Always. No exceptions, no free lunches, no magic 
 connectoplasmic shortcuts.

The brain is actually fantastically simple... 

It is nothing compared with the core of a linux operating system
(kernel+glibc+gcc). 

Heck, even the underlying PC hardware is more complex in a number of
ways than the brain, it seems... 

The brain is very RISCy... using a relatively simple processing pattern
and then repeating it millions of times. 

So while an adult brain has a few billion neurons, the program which
produces it is only a few megabytes in size... (The entire genome is
about 750 MB, most of which is believed to be either inactive or there
purely for structural reasons.)

 We know from the biology folks that the human mind contains at least 
 dozens, and probably hundreds of specialized subsystems.

In the cortex, I would propose the number is 28 for the left hemisphere,
and maybe another 10 or so in the right hemisphere which don't directly
overlap with the ones on the left.

The sense of smell is strange, but the vision and motor reflexes only
constitute maybe two dozen instances of maybe 5 or so distinct design
patterns. (We only need to worry about the design patterns). 

I do agree that an early AI project should try to replicate as much of
the functionality found in the brain as possible. I will be proposing an
architecture along these lines in a few months... 

 The ones that computer scientists have tried to replicate, like vision 
 and hearing, have turned out to contain massive amounts of complexity - 
 computer vision alone is apparently the kind of problem that takes a 
 good, well-funded team several decades to solve.

Consider the chess problem. 
The present computer Chess solutions are widely acknowledged to be much
less efficient than the ones in the brain. So the complexity that you
are trying to argue is necessary for AGI is merely reflective of our
currently poor programming methodologies.

 What this means for AI research is that any serious attempt to create 
 an AGI by duplicating the way human minds work would be a massive 
 effort, at least one and probably two orders of magnitude larger than 
 any software development effort ever attempted.

I would say that it would require maybe a dozen highly gifted devels
with maybe 20 code-grunts for the support framework.

 That makes it much too big for current software engineering methods, so 
 the effort would almost certainly fail.

Don't implement the mind, implement the brain! =P 
 

-- 
I WANT A DEC ALPHA!!! =)
21364: THE UNDISPUTED GOD OF ALL CPUS.
http://users.rcn.com/alangrimes/
[if rcn.com doesn't work, try erols.com ]




RE: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Ben Goertzel

I agree with your qualitative point that a computationally efficient
intelligence has got to consist of a combination of specialized systems
(operating tightly coupled together in a common framework, and with many
commonalities and overlaps).

However, I don't agree with your quantitative estimate that an AGI has to be
orders of magnitude bigger than any software project ever attempted.

I agree that many people underestimate the problem, but I think you
overestimate the problem.  And mis-estimate it.  I think you overestimate
the bulk of the problem and underestimate the subtlety of finding the right
framework and the right algorithms.

The brain is a hugely complex tangled mess of structures and processes, but
that doesn't mean that an AGI has to be.  AGI does not mean brain emulation.
Legs are vastly more complex than wheels, yet wheels are good at moving
around too.  (And wheels can't help you invent artificial legs, whereas a
nonhuman AGI can potentially help you figure out how to make a more human
AGI if you want to).

You  mention the vast amount of work that's gone into computer vision and
audition.  That is true, but I think that those disciplines would be a lot
more tractable if they were carried out together with AGI cognition, rather
than separately.  Pursuing them standalone may make them harder in many
ways, rather than easier.

My guess, not surprisingly, is that the Novamente design is close to the
minimal level of complexity needed ;)  Dozens of node and link types, a few
dozen mental processes, and a couple dozen functionally-specialized units
combining node and link types and processes in appropriate ways.  This is a
lot more complexity than the typical AI program but a lot less complexity
than you seem to be alluding to.

But of course, none of us *really know*.  Eliezer Yudkowsky in the past has
partially agreed with you, in that he's proposed the Novamente design is
significantly too simple.

-- Ben



 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
 Behalf Of Billy Brown
 Sent: Tuesday, February 18, 2003 2:54 PM
 To: [EMAIL PROTECTED]
 Subject: AGI Complexity (WAS: RE: [agi] doubling time watcher.)


 From recent comments here I can see there are still a lot of people out
 there who think that building an AGI is a relatively modest-size project,
 and the key to success is simply uncovering some new insight or technique
 that has been overlooked thus far. IMHO this is partly a matter of necessary
 optimism (i.e. we can only afford a 4-man-year project, so let's hope that
 will be enough), and partly a sort of bleedover from the view of human
 minds that dominated the social sciences for most of the 20th century (i.e.
 infants are a blank slate, and blank slates sound pretty simple, so a
 newly-written AGI must be a relatively simple program). Unfortunately for
 AI optimists, all the evidence points in the opposite direction.

 If we have learned nothing else about the nature of Mind in the last 50
 years, we should at least have learned this: complex adaptive behavior
 requires a complex, specialized implementation. Always. No exceptions, no
 free lunches, no magic connectoplasmic shortcuts.

 We know from the biology folks that the human mind contains at least
 dozens, and probably hundreds of specialized subsystems. The ones that
 computer scientists have tried to replicate, like vision and hearing, have
 turned out to contain massive amounts of complexity - computer vision alone
 is apparently the kind of problem that takes a good, well-funded team
 several decades to solve.

 Now, it may be that some particular subsystems can be omitted from an AGI
 that isn't intended to be very humanlike. An AGI with no body may not need
 a kinesthetic sense or motor skills, an AGI without cameras may not need
 vision, and so on. But anyone who thinks there is some tiny kernel of pure
 thought in there waiting to be duplicated, and all the rest can be safely
 ignored, is just kidding themselves. Every part of the mind that we have
 any understanding of at all has turned out to be a tangle of complex
 algorithms interacting in very complex ways. There is no reason to believe
 the parts we don't understand are any different.

 What this means for AI research is that any serious attempt to create an
 AGI by duplicating the way human minds work would be a massive effort, at
 least one and probably two orders of magnitude larger than any software
 development effort ever attempted. That makes it much too big for current
 software engineering methods, so the effort would almost certainly fail.

 For projects that intend to implement a completely novel design, the
 implication is that you can't realistically expect anything like
 human-equivalent performance on unrestricted tasks. Evolution wouldn't have
 given us the equivalent of hundreds of millions of lines of specialized
 software if there were some easy shortcut waiting to be found. So, if you're

Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote:


But of course, none of us *really know*.


Technically, I believe you mean that you *think* none of us really know, 
but you don't *know* that none of us really know.  To *know* that none of 
us really know, you would have to really know.

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



RE: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Billy Brown
Ben Goertzel wrote:
 And I'm a huge advocate of the integrative approach.
 My feeling is that  maybe half of the ingredients of
 an AGI are things that were created for other (usually
 narrow AI) purposes and can be used, not off the shelf,
 but with only moderate rather than severe modifications.
 The other half are things that are certainly *related*
 to known science, but are unique to AGI, without a
 significant use as standalone systems.

For the most part I agree with you (at least, until narrow-AI technology
becomes more common in commercial apps). I do, however, think that a lot of
people take this as an excuse to just write everything themselves. In
particular, I've noticed people tend to invent their own network protocols,
database systems, and other basic building blocks despite the fact that in
most cases they would be better off just buying a commercial product. IMHO
this is a complete waste of effort - an AI team should spend as much of its
time as possible solving AI problems, not trying to optimize their file IO.

Billy Brown




Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Brad Wyble
 The brain is actually fantastically simple... 
 
 It is nothing compared with the core of a linux operating system
 (kernel+glibc+gcc). 
 
 Heck, even the underlying PC hardware is more complex in a number of
 ways than the brain, it seems... 
 
 The brain is very RISCy... using a relatively simple processing pattern
 and then repeating it millions of times. 

Alan, I strongly suggest you increase your familiarity with neuroscience before making 
such claims in the future.  I'm not sure what simplified model of the neuron you are 
using, but be assured that there are many layers of complexity of function within even 
a simple neuron, let alone in networks.  The coupled resistor/capacitor model is only 
given as a simplified version in textbooks to make the topic of neural networks 
digestible to the entry-level student.  Dendrites are not simple summators, they have 
a variety of nonlinear processes including recursive, catalytic chemical reactions and 
complex second-messenger systems.  That's just the tip of the iceberg once you get 
into pharmacological subsystems, the complexity becomes a bit staggering. 

If it were fantastically simple, more so than a Linux box, do you think that thousands 
of scientists working over more than one hundred years would still understand it so 
poorly, yet it takes a group of 5 people 2 years to crank out a new Linux OS?
   
 
  We know from the biology folks that the human mind contains at least 
  dozens, and probably hundreds of specialized subsystems.
 
 In the cortex, I would propose the number is 28 for the left hemisphere,
 and maybe another 10 or so in the right hemisphere which don't directly
 overlap with the ones on the left.

You realize that the blobs drawn on images of the brain in college level textbooks are 
simply areas of cell responsivity, and not diagrams of the systems themselves?  The 
cortex is highly differentiated containing probably dozens if not hundreds of systems, 
not to mention the enormous variety of specialized systems at the subcortical level.   
The complex soup of the reticular formation is sufficient to turn a sane anatomist 
into a sobbing wreck with its dozens of specific nerve clusters.


 
 Consider the chess problem. 
 The present computer Chess solutions are widely acknowledged to be much
 less efficient than the ones in the brain. So the complexity that you
 are trying to argue is necessary for AGI is merely reflective of our
 currently poor programming methodologies.

Chess is a game designed by the mind, so it is no surprise that it is something the 
mind is good at.  It is trivial to design games that computers are vastly superior at, 
but that does not mean the mind has poor programming methodologies.



_Brad








Re: [agi] doubling time watcher.

2003-02-18 Thread Eliezer S. Yudkowsky
Brad Wyble wrote:


I'm uncomfortable with the phrase Human Equivalent because I think we
are very far from understanding what that phrase even means.  We don't
yet know the relevant computational units of brain function.  It's
not just spikes, it's not just EEG rhythms.  I understand we'll never
know for certain, but at the moment, the possibility of guesstimating
within even an order of magnitude seems premature.


See also "Human-level software crossover date" from the human crossover 
metathread on SL4:

http://sl4.org/archive/0104/1057.html

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



RE: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Ben Goertzel

Well, we invented our own specialized database system (in effect) but not
our own network protocol.

In each case, it's a tough decision whether to reuse or reimplement.  The
right choice always comes down to the nasty little details...

The biggest AI waste of time has probably been implementing new programming
languages, thinking that if you just had the right language, coding the AI
would be SO much easier.  Ummm...

Ben






Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Brad Wyble

 
 
 Well, we invented our own specialized database system (in effect) but not
 our own network protocol.
 
 In each case, it's a tough decision whether to reuse or reimplement.  The
 right choice always comes down to the nasty little details...
 
 The biggest Ai waste of time has probably been implementing new programming
 languages, thinking that if you just had the right language, coding the AI
 would be SO much easier.  Ummm...
 

The thing that gives me the most confidence in you, Ben, is that you made it to round 2 
and you're still swinging.  You've personally learned the hard lessons of AGI design 
and its pitfalls that most of the rest of us can only imagine by analogy.  

-Brad




Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Alan Grimes
Brad Wyble wrote:
  Heck, even the underlying PC hardware is more complex in a number of
  ways than the brain, it seems...

  The brain is very RISCy... using a relatively simple processing 
 pattern and then repeating it millions of times.


 Alan, I strongly suggest you increase your familiarity with 
 neuroscience before making such claims in the future.  I'm not sure 
 what simplified model of the neuron you are using, but be assured that 
 there are many layers of complexity of function within even a simple 
 neuron, let alone in networks. 

I haven't looked at the neuron in quite a while. =P
But I don't consider myself [completely] insane in this context either.

 Dendrites are not simple summators, they have a variety of nonlinear 
 processes including recursive, catalytic chemical reactions and complex 
 second-messenger systems.  That's just the tip of the iceberg once you 
 get into pharmacological subsystems, the complexity becomes a bit 
 staggering.

Yeah, the dendrite _trees_ are quite complex. My interest, however, lies
in the *forest*. ;) 

So the question is: what program is necessary to generate a system with
the same computational characteristics as the brain? (completely
ignoring the implementation details, most of which are irrelevant or
artifacts of the general implementation strategy). 

My current understanding draws heavily on The Cerebral Code by William
H. Calvin (assuming I don't have to go all the way over to the shelf to
check the name). Calvin proposes what amounts to a sophisticated,
optimized cellular automaton. 

I'll go ahead and sketch it out here: 

Start with Conway's game of life...

Notice that it is rather slow because of its topology; if it were more
strongly connected, signals could travel faster and more efficiently... 
To solve this we add a second layer of topology in the form of shortcuts
between the various regions, and hence we have the subcortical
pathways...

Now that our system is roughly brain-shaped we consider the cells
individually. Conway proposed a computationally universal model which
possessed only one bit of state. This system would require large numbers
of cells to express concepts such as degree of magnitude and other
similarly important facets. It also has no inherent distinction between
situational awareness and long-term skill and memory systems, making it
vulnerable to computer viruses and generally too dynamic to support
stable long-term behavior patterns. 

We solve the first problem by increasing the amount of state the thing
can carry... From a single bit we now have a vector of some unknown
length (probably 10-12 8-bit words or less) that expresses the current
pattern under study. This actually reduces the total complexity of the
system drastically. 

This system is still too dynamic; we want to ground it in a more stable
system. We create two classes of state, a persistent structural state
and a dynamic state that expresses the present activation of the
persistent state. In almost all higher animals, a sleep period is
required to clear the chaotic dynamic state of the matrix and
re-initialize it from the persistent state. The reset process occurs
during delta wave sleep and the re-init process occurs during beta wave
sleep. Also during this time, the almost totally unbiased computational
matrix which is the cortex is programmed through a program running on a
small subset of the cortex, loaded from what is essentially a ROM: the
amygdala and hypothalamus, as well as certain structures in the
reticular formation.

The neocortex, as far as I know, is fairly uniform in general algorithm.
We only need to wire it up slightly differently for each region. I
don't know whether this applies to the older cortical regions such as
the hippocampus as well. I do know that the latter structures use a
different and moderately less complex algorithm... 
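
To make the two-tier-state idea concrete, here is a toy sketch in Python. The grid
size, vector length, and blending rule are arbitrary placeholders, not Calvin's actual
model; the only point is the split between a fast dynamic state and a slow persistent
state that re-seeds it after a "sleep".

SIZE = 64   # toy cortical sheet, SIZE x SIZE cells
VEC = 8     # small state vector per cell (Conway's single bit, generalized)

def blank_sheet():
    return [[[0] * VEC for _ in range(SIZE)] for _ in range(SIZE)]

persistent = blank_sheet()   # slow "structural" state (long-term skill/memory)
dynamic = blank_sheet()      # fast activation state (the patterns currently live)

def neighbors(i, j):
    """Dynamic state of the 8 surrounding cells, wrapping at the edges."""
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                yield dynamic[(i + di) % SIZE][(j + dj) % SIZE]

def step():
    """One update: each cell blends its neighbors' activity with its own
    persistent bias.  An arbitrary placeholder rule, not a cortical model."""
    global dynamic
    new = blank_sheet()
    for i in range(SIZE):
        for j in range(SIZE):
            for k in range(VEC):
                drive = sum(n[k] for n in neighbors(i, j)) // 8
                new[i][j][k] = (drive + persistent[i][j][k]) % 256
    dynamic = new

def sleep():
    """Clear the chaotic dynamic state and re-seed it from the persistent
    state (the delta-wave reset / re-init idea sketched above)."""
    global dynamic
    dynamic = [[list(cell) for cell in row] for row in persistent]

step()
sleep()   # one update tick, then a "night's sleep"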

 If it were fantastically simple, more so than a Linux box, do you think 
 that thousands of scientists working over more than one hundred years 
 would still understand it so poorly, yet it takes a group of 5 people 2 
 years to crank out a new Linux OS?

That's not a proof at all. The evident fact that nobody has yet tried
the right approach has no relationship to the nature of that correct
approach.

  In the cortex, I would propose the number is 28 for the left 
 hemisphere, and maybe another 10 or so in the right hemisphere which 
 don't directly overlap with the ones on the left.

 You realize that the blobs drawn on images of the brain in college 
 level textbooks are simply areas of cell responsivity, and not diagrams 
 of the systems themselves?

[/me feels a sudden intense wave of frustration.]

MY LACK OF KNOWLEDGE OF ANY SUCH SYSTEM IS A DIRECT RESULT OF THE
DEFICIENCIES OF SAID COLLEGE TEXTBOOKS. =

I'm 100% self taught at this point. =\

 The cortex is highly differentiated containing probably dozens if not 
 hundreds of systems, not to mention the enormous variety of specialized 
 

RE: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Billy Brown
Ben Goertzel wrote:
 However, I don't agree with your quantitative estimate that an AGI has to be
 orders of magnitude bigger than any software project ever attempted.

 I agree that many people underestimate the problem, but I think you
 overestimate the problem.  And mis-estimate it.  I think you overestimate
 the bulk of the problem and underestimate the subtlety of finding
 the right framework and the right algorithms.

 The brain is a hugely complex tangled mess of structures and processes, but
 that doesn't mean that an AGI has to be.  AGI does not mean brain emulation.
 Legs are vastly more complex than wheels, yet wheels are good at moving
 around too.  (And wheels can't help you invent artificial legs, whereas a
 nonhuman AGI can potentially help you figure out how to make a more human
 AGI if you want to).

That isn't as close an analogy as it seems. A leg must do many things that
wheels don't - grow, heal, resist microorganisms, raise and lower the body,
cross a wide variety of rough terrain, etc. If we tried to build a machine
with all of the same capabilities, it is not at all clear that it would be
simpler.

The brain does have a few tasks an AGI doesn't have to worry about, like
metabolism and immune response. But these complexities are mostly down at
the cellular level, and I wasn't arguing that an AGI has to duplicate such
things. The biggest simplification I see that is relevant here is the fact
that the brain must self-organize to a large extent, while an AGI could be
coded in its final configuration. But AI projects usually expect most of the
complexity of the final system to emerge through some kind of training
process, which means you're tackling exactly the same problem.

That leaves two popular options that I don't think will work out:

1) You can leave out huge chunks of functionality in the hope that they
aren't needed for intelligence. This might work, but it isn't nearly as safe
as it might seem. Our human version of general intelligence seems to rely
heavily on drafting big specialized systems (like visualization and
language) for use in new domains whose problems happen to have analogous
regularities. Without a lot more knowledge than anyone currently has about
how intelligence works, it seems likely that you'll omit something you can't
get by without.

2) You can ignore all the messy stuff devoted to dealing with the physical
world, like sensory processing and motor control, and concentrate solely on
implementing abstract thought. That sounds promising, except that it's
exactly what most AI projects have been doing for 50 years, and the progress
to date has been underwhelming. Besides which, that only cuts out something
like 40% - 80% of the brain (depending on where you draw the line), which
would still leave you with a gigantic project implementing the features you
decided to keep.

Do you see another option for simplification?

 You  mention the vast amount of work that's gone into computer vision and
 audition.  That is true, but I think that those disciplines would be a lot
 more tractable if they were carried out together with AGI cognition,
rather
 than separately.  Pursuing them standalone may make them harder in many
 ways, rather than easier.

Maybe. Maybe not. To be honest, I think most people in this field have a bad
habit of using general intelligence as a magic wand to gloss over hard
problems that are going to require specialized mechanisms no matter how
smart the overall system is.

For example, in the case of computer vision, just getting from a 2D array of
pixels to a possible set of object geometries takes a heck of a lot of work,
and it has to be done by fast, dumb code for performance reasons. After that
you have to recognize objects (a narrow problem), build a useful world-model
(another narrow problem), detect and fix visual illusions and other data
corruptions (yet another narrow problem), and so on. Once you have all these
mechanisms you might be able to improve the results a bit by having the AI
think about the output (Hmm, no, I'm sure that can't really be Santa Claus
on that rooftop. It must be a Christmas display.). But you can't avoid
building the specialized mechanisms in the first place.

 My guess, not surprisingly, is that the Novamente design is close to the
 minimal level of complexity needed ;)

Well, of course. Otherwise you wouldn't be building it.  :)

But I do think there would be a lot more progress in AI if more people were
building systems designed merely to solve the next obvious obstacle on the
path to AGI, or to provide a platform for future work. What we have now is
like a football team where the quarterback won't throw a pass unless the
receiver is standing next to the goal post. Lots of long shots, little
progress.

OTOH, at least Novamente has enough internal complexity to reach territory
that hasn't already been explored by classical AI research. I don't expect
it to wake up, but I expect it will be a lot more productive than 

Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Jonathan Standley

 Alan, I strongly suggest you increase your familiarity with neuroscience
before making such claims in the future.  I'm not sure what simplified model
of the neuron you are using, but be assured that there are many layers of
complexity of function within even a simple neuron, let alone in networks.
The coupled resistor/capacitor model is only given as a simplified version
in textbooks to make the topic of neural networks digestible to the
entry-level student.  Dendrites are not simple summators, they have a
variety of nonlinear processes including recursive, catalytic chemical
reactions and complex second-messenger systems.  That's just the tip of the
iceberg once you get into pharmacological subsystems, the complexity becomes
a bit staggering.


Agreed that the brain is enormously complex; however, I think the point Alan
was making hinges on a slightly different interpretation of the word
"complexity".

His interpretation seems to be similar to that which Hofstadter elucidates
in GEB; namely the idea of 'sealing off' of levels.  You can look at the
mind through different perspectives and at varying scales because of its
high complexity.  Yet this very trait, arising from the brain's
mind-boggling complexity, allows one to model it at a system-scale level.
At a high enough level, you can start treating various major components as
black boxes, and dealing only with their high-level functionality.  Of course
you lose a certain amount of accuracy in doing this, but it is nonetheless a
valid approach.  We view and deal with other people as unified personalities
whose minds we cannot read.  Rather, we observe their actions and draw
conclusions about internal states that cannot be directly observed in the
absence of sophisticated brain-scanning technology.  Despite this
limitation, we are able to interact with others and predict their future
behavior and mental states to a reasonable degree.

Say I'm designing an AGI architecture (which I am, btw, but it is irrelevant
to this discussion :)  and I want to preprocess audio data so that speech is
already parsed by the time it enters the AI's cognitive modules.  All I need
to do is obtain a preexisting natural language parser program and then
tailor the AI cognitive module(s) to work with its output instead of raw
audio data.  I don't need to even look at the parser's code if I don't want
to.  (Although it may ease the use of it if I do examine it, it's not
necessary.)

I suppose I'm saying you can approach the mind (or any complex system that
has at least vaguely recognizable functional subsystems) in a manner
analogous to that of Object Oriented Programming
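
In code terms, the sealed-off-module idea might look something like this (Python; the
class and method names are invented for illustration, and no particular parser is
assumed):

class SpeechParser:
    """Black box: however it works inside, it turns raw audio into a parse."""
    def parse(self, audio_samples):
        # Stand-in body; a real system would wrap an existing parser here.
        return {"tokens": ["hello", "world"], "parse_tree": None}

class CognitiveModule:
    """Written against the parser's output format, never its internals."""
    def perceive(self, parsed):
        return "heard %d tokens" % len(parsed["tokens"])

parser = SpeechParser()
mind = CognitiveModule()
print(mind.perceive(parser.parse(audio_samples=[])))   # -> heard 2 tokens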

Jonathan Standley




Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Alan Grimes
 OTOH, at least Novamente has enough internal complexity to reach 
 territory that hasn't already been explored by classical AI research. I 
 don't expect it to wake up, but I expect it will be a lot more 
 productive than those One True Simple Formula For Intelligence-type 
 projects.

Yes and no. 

You are right that an AI entity will be a complex system with a great
deal of pre-programmed structure. 

You are mostly wrong, however, in denouncing the one true formula
approach, because there really is only one obvious problem standing in
our way to general intelligence, and that's the friggin' translation
problem... 

It's "friggin'" because Hofstadter came up with it some 20 years ago and the
solution has not yet been discovered...

After the translation problem is solved, creating AI entities should be
so simple that even an unmodified human 8-year-old could do it if
provided with a toolbox of drag-and-drop components... (Actually
creating the components would require at least a HS level compsci
course)... 

-- 
I WANT A DEC ALPHA!!! =)
21364: THE UNDISPUTED GOD OF ALL CPUS.
http://users.rcn.com/alangrimes/
[if rcn.com doesn't work, try erols.com ]




RE: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Ben Goertzel

 Do you see another option for simplification?

I am not starting from a foundational concept of brain emulation, so I'm
not really faced with the problem of simplifying the brain.

 Maybe. Maybe not. To be honest, I think most people in this field
 have a bad
 habit of using general intelligence as a magic wand to gloss over hard
 problems that are going to require specialized mechanisms no matter how
 smart the overall system is.

I like to distinguish two kinds of specialized mechanisms:

1) those that are autonomous

2) those that build specialized functionality on a foundation of
general-intelligence-oriented structures and dynamics

The AI field, so far, has focused mainly on Type 1.  But I think Type 2 is
more important.

 For example, in the case of computer vision, just getting from a
 2D array of
 pixels to a possible set of object geometries takes a heck of a
 lot of work,
 and it has to be done by fast, dumb code for performance reasons.
 After that
 you have to recognize objects (a narrow problem), build a useful
 world-model
 (another narrow problem), detect and fix visual illusions and other data
 corruptions (yet another narrow problem), and so on. Once you
 have all these
 mechanisms you might be able to improve the results a bit by having the AI
 think about the output (Hmm, no, I'm sure that can't really be
 Santa Clause
 on that rooftop. It must be a Christmas display.). But you can't avoid
 building the specialized mechanisms in the first place.

I think the general intelligence mechanisms for vision occur at a much
lower level than your example suggests.

I think that object recognition and world-model-building, for example, use
Type 2 specialization, not Type 1.

I agree that edge detection, for example, is pure Type 1 specialization.

 But I do think there would be a lot more progress in AI if more
 people were
 building systems designed merely to solve the next obvious obstacle on the
 path to AGI, or to provide a platform for future work.

I think that is what the bulk of academic AI researchers are doing.  The
folks on this list who are actively working on AI tend to be exceptions,
with more ambitious goals.

 What we have now is
 like a football team where the quarterback won't throw a pass unless the
 receiver is standing next to the goal post. Lots of long shots, little
 progress.

Again, the contemporary mainstream AI field is really very conservative,
concerned entirely with taking small steps in a risk-averse way.

 OTOH, at least Novamente has enough internal complexity to reach territory
 that hasn't already been explored by classical AI research. I don't expect
 it to wake up, but I expect it will be a lot more productive than those
 One True Simple Formula For Intelligence-type projects.

Well, I certainly hope Novamente will be more productive than that type of
project ;)  However, the type of project you cite is more characteristic of
AI of the 60's and 70's than of modern mainstream AI.

Nearly all contemporary AI researchers are not actively seeking AGI at all;
by and large, they think it's hundreds of years off, and are working on
highly specialized algorithms attacking subproblems of intelligence.  Which
seems to be exactly what you think they should be doing!

-- Ben G




RE: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Ben Goertzel

 The thing that gives me the most confidence in you Ben is that 
 you made it to round 2 and you're still swinging.  You've 
 personally learned the hard lessons of AGI design 

Well, some of them ;)  I'm sure there are plenty of hard lessons ahead!!  

-- ben


 and its 
 pitfalls that most of the rest of us can only imagine by analogy.  
 
 -Brad




Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Ed Heflin
 Say I'm designing an AGI architecture (which I am, btw, but it is irrelevant
 to this discussion :)  and I want to preprocess audio data so that speech is
 already parsed by the time it enters the AI's cognitive modules.  All I need
 to do is obtain a preexisting natural language parser program and then
 tailor the AI cognitive module(s) to work with its output instead of raw
 audio data.  I don't need to even look at the parser's code if I don't want
 to.  (Although it may ease the use of it if I do examine it, it's not
 necessary.)

From the MS Speech Development Kit genre, I believe some of the early SAPI
versions, i.e. <= 4.0, did some limited amount of syntactical natural
language parsing along with the speech recognition.  It's been some time
since I looked at this, but I believe my conclusion was that it wasn't all
that reliable, i.e. low % accuracy for correct POS identification, etc.  I
don't know if this gets you where you want to go, but it might be worth
looking at.

BTB, it seems a better, more forward-looking approach to your architecture
might be to implement audio parsing (AP - or speech recognition, SR?),
natural language parsing (NLP) and cognitive processing (CP, or cognition) as
a coherent whole, not the other way around with separate and distinct audio
parsing (AP), natural language parsing (NLP), and cognitive processing (CP)
modules... as you suggest with your comments about an OO approach.

In addition to the tremendous benefits of architecting something closer to
real AGI, i.e. an obvious increase in the 'Goertzelian Real-AGI' level ;-),
you would have the benefits of computational optimization, specifically,
reduced # of ops to cognition, reduced object I/O, reduced latency, reduced
processing redundancy, etc. assuming, of course, your implementation of the
cognitive processing (CP) doesn't incur a tremendous overhead from the
synthesis with the other two modules.


 I suppose I'm saying you can approach the mind (or any complex system that
 has at least vaguely recognizable functional subsystems) in a manner
 analogous to that of Object Oriented Programming

Ibid.

Just my $0.02 worth.  EGHeflin


 Jonathan Standley






Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Bill Hibbard
On Tue, 18 Feb 2003, Brad Wyble wrote:

 . . .
 Incorrect.  The cortex has genetically pre-programmed systems.
 It cannot be said that is a matrix loaded with software from
 subcortical structures..
 . . .

Yes, but there is a very interesting experiment with rewiring
brains of young ferrets so that visual signals from their
retinas do not connect to their visual cortex area V1, but
instead connect to their auditory cortex. The same banded
structure of cells specialized for different line direction
orientations that normally develop in the visual cortex develop
in the auditory cortex. This suggests that these structures
are not encoded in ferret genes, but rather are learned in
response to the structure of visual stimuli.

The reference is:
  Sharma, J., Angelucci, A., Sur, M. Induction of visual
  orientation modules in auditory cortex. Nature 404, 841-847.
  2000.

I don't pretend to know anywhere near what you do about
neuroscience, but thought you might find this interesting.

Cheers,
Bill




Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Alan Grimes
[META: please turn line-wrap on, for each of these responses my own
standards for outgoing mail necessitate that I go through each line and
ensure all quotations are properly formatted...]

Brad Wyble wrote:

 The situation for understanding a single neuron is somewhat disastrous. 
...
 I'm just trying to give you a taste of the sophistications that are 
 relevant to brain function and cannot be glossed over.

Iff the brain is not unique in its capability to support intelligence,
then all of this can be replaced by some abstract model with the same
basic computational characteristics, but in a very different way. 


  So the question is: what program is necessary to generate a system 
 with the same computational characteristics as the brain? (completely
  ignoring the implementation details, most of which are irrelevant or
  artifacts of the general implementation strategy).

 The implementation details are what tells you how the brain 
 functions.

I don't care _HOW_ it functions, I care about _WHAT_ a given section
accomplishes through its functioning. 

Given that, it should be relatively straightforward to find a
work-alike. 

Failing that, it is still possible to set up a system akin to Creatures
but with a much more powerful engine and wait until a good'nuff
algorithm evolves on its own... 

This is, in fact, my basic plan at this juncture. =)

 We don't know the computational characteristics yet because 
 they are so extraordinarily complex.  We don't yet completely 
 understand how a *single synapse* functions. 

You're squinting too hard.


  My current understanding draws heavily on The Cerebral Code by 
 William H. Calvin (assuming I don't have to go all the way over to the 
 shelf to check the name). Calvin proposes what amounts to a sophisticated,
  optimized cellular automaton.

 He's a fine author of pop neuroscience, but in order to be accessible 
 he necessarily glosses over many layers of complexity.  It is a mistake 
 to take his simplified representations at face value.  He needs to 
 simplify to get his good ideas across.  Use the ideas, but don't 
 extrapolate brain functions from his simplistic depictions.

rant mode engaged
I HATE IVORYTOWERISM!!!
IF A BOOK DOESN'T TELL IT LIKE IT IS, IT SHOULD NEVER BE PUBLISHED, EVEN
TO LITTLE CHILDREN!! (Especially not to little children.)

Actually, I was looking at the book for the first time in years, trying
to use it as a reference text. I gave up because the damn thing had so
much fluff as to be a waste of time... (Is there a paper on the theory?)

IvoryTowerism: You have to sneak onto a university campus to get
anywhere near a reasonably complete library/bookstore and then pay
black-market prices to cart one off...  =(((
Rant disengaged

  This system is still too dynamic; we want to ground it in a more
  stable system. We create two classes of state, a persistent structural
  state and a dynamic state that expresses the present activation of the
  persistent state. In almost all higher animals, a sleep period is
  required to clear the chaotic dynamic state of the matrix and
  re-initialize it from the persistent state. The reset process occurs
  during delta wave sleep and the re-init process occurs during beta
  wave sleep. Also during this time, the almost totally unbiased
  computational matrix which is the cortex is programmed by a program
  running on a small subset of the cortex, loaded from what is
  essentially a ROM: the amygdala and hypothalamus, as well as
  certain structures in the reticular formation.

 Incorrect.  The cortex has genetically pre-programmed systems.  It 
 cannot be said that it is a matrix loaded with software from subcortical 
 structures.

You are actually agreeing with me. =P 

The brain does have an innate structure in the form of the topology I
mentioned earlier. This topology naturally leads to the development of
functional systems. HOWEVER, there is no law in the *cortex* which
governs what behaviors it will produce (likes, dislikes, etc.); these
must be input either from the environment or from the subcortical
structures.

  The neocortex, as far as I know, is fairly uniform in general
  algorithm. We only need to wire it up slightly differently for each
  region. I don't know whether this applies to the older cortical
  regions such as the hippocampus as well. I do know that the latter
  structures use a different and moderately less complex algorithm...

 It is not, in fact, fairly uniform.  It varies in architecture (the 
 type and percentage of various cell types as well as layer thickness) as 
 well as by connectivity with other structures.  The variations are on 
 the scale of millimeters, so there will be quite a lot of them.

Yes, and I don't think those variations in layers or even connectivity
are at all significant. Of course you want to know which layer is for
input and which layer is for feedback, but you don't really worry
yourself about the measurements, which are probably a byproduct of
having more neurons in those regions that are heavily connected and
not, in themselves, interesting... The extra layers in the occipital
lobe are probably nothing more than the equivalent of a math
coprocessor in a computer...

RE: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Billy Brown
Ben Goertzel wrote:
 I like to distinguish two kinds of specialized mechanisms:

 1) those that are autonomous

 2) those that build specialized functionality on a foundation of
 general-intelligence-oriented structures and dynamics

 The AI field, so far, has focused mainly on Type 1.  But I think Type 2 is
 more important.

Hmm. Well, using your terminology, I would say that:

1) Type 2 mechanisms are only possible once you have the proper set of type
1 mechanisms (i.e. the ones that implement thought in the first place).
2) Type 2 mechanisms that are not supported by the proper type 1 mechanisms
for a particular problem domain tend to be astronomically inefficient.
3) Achieving a human-like generality of intelligence is likely to require a
human-like assortment of Type 1 mechanisms, except in areas where you can
afford astronomical inefficiency.

An obvious example of 2 is the world-model problem in robotics. If a dumb AI
doesn't have a specialized mechanism for dealing with physical objects
interacting in 3-D space, it just gets stuck. A smart AGI might be able to
fake it by reasoning about the same data in a more abstract fashion, but
this is like a human trying to aim a tennis serve with a physics book and a
calculator - slow and error prone.
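
To make the contrast concrete, here is a toy sketch in Python (all numbers
invented) of what such a specialized mechanism looks like for the
tennis-ball case: a few lines of hard-coded ballistics, where a general
reasoner would have to re-derive the physics from abstract knowledge every
time.

# Toy "physical object" module: a hard-coded, drag-free ballistic predictor.
import math

G = 9.81  # gravitational acceleration, m/s^2

def predict_landing(x0, vx, vy):
    """Horizontal position where a projectile launched from x0 with
    velocity (vx, vy) returns to its launch height (no air resistance)."""
    t_flight = 2.0 * vy / G   # time to come back down to launch height
    return x0 + vx * t_flight

# Where does a serve hit at 30 m/s, 10 degrees above horizontal, come down?
speed, angle = 30.0, math.radians(10)
print(predict_landing(0.0, speed * math.cos(angle), speed * math.sin(angle)))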

One interesting prediction of this view is that it should be very easy to
build an AI that seems promising in a domain much broader than those
addressed by expert systems (like data analysis or even logical
reasoning), and yet fails miserably when you try to introduce it to some
other challenge humans consider routine (like predicting where a tennis ball
will go after it gets hit). In other words, the brittleness problem may be
intractable.

 I think the general intelligence mechanisms for vision occur at a much
 lower level than your example suggests.

 I think that object recognition and world-model-building, for example, use
 Type 2 specialization, not Type 1

In the case of object recognition, that would be possible but amazingly
inefficient compared to a type 1 approach. For a world model I don't see how
it is possible at all, unless you artificially limit what kinds of facts
about the world you need to work with.

 I think that is what the bulk of academic AI researchers are doing.  The
 folks on this list who are actively working on AI tend to be exceptions,
 with more ambitious goals.

 Again, the contemporary mainstream AI field is really very conservative,
 concerned entirely with taking small steps in a risk-averse way.

 Nearly all contemporary AI researchers are not actively seeking AGI at all;
 by and large, they think it's hundreds of years off, and are working on
 highly specialized algorithms attacking subproblems of intelligence.  Which
 seems to be exactly what you think they should be doing!

Not exactly. It isn't that I think we should give up on AGI, but rather that
we should be consciously planning for it to take several decades to get
there. We should still tackle the problems in front of us, instead of giving
up on real AI work altogether. But we need to get past the idea that every
AI project should start from scratch and end up delivering a
human-equivalent AGI, because that isn't going to happen. We just aren't
that close yet.

The way the software industry has solved big challenges in the past is to
break them up into sub-problems, figure out which sub-problems can be solved
right now, solve them as thoroughly as possible, and offer the resulting
solutions as black boxes that can then become inputs into the next round of
problem solving. That's what happened with operating systems, and
development environments, and database systems. If we want to see real
progress in AI, the same thing needs to happen to problems like NLP,
computer vision, memory, attention, etc.

Too bad there isn't much of a market for most of those partial solutions...

Billy Brown

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Brad Wyble
 
 [META: please turn line-wrap on, for each of these responses my own
 standards for outgoing mail necessitate that I go through each line and
 ensure all quotations are properly formatted...]

I think we're suffering from emacs issues; I'm using elm.

 
 Iff the brain is not unique in its capability to support intelligence
 then all of this can be replaced by some abstract model with the same
 basic computational charactaristics but in a very different way. 

I totally agree.  But the genesis of this debate was whether the brain is 
complicated in a non-trivial way.  The fact that it is complicated does not mean it 
cannot be replicated in a different substrate (and like Ben, I think it would be a 
misapplication of effort to try).  

 
  The implementation details are what tells you how the brain 
  functions.
 
 I don't care _HOW_ it functions, I care about _WHAT_ a given section
 accomplishes through its functioning. 

The nature of neuroscience research doesn't really differentiate between the two at 
present.  In order to understand WHAT a brain part does, we have to understand HOW it, 
and all structures connected to it, function.  We need to understand the inputs and 
the outputs, and that's all HOW.  

There are people who approach the problem from a purely black-box perspective of 
course, by giving people memory tests and looking at the pattern of failures.  This is 
extremely interesting work, particularly as regards the types of errors people make 
while speaking.  (http://www.wjh.harvard.edu/~caram/pubs.htm)
I don't think it's sufficient, on its own, to figure out the brain without 
simultaneously looking at the neural data.  

 
 Given that, it should be relatively streight forward to find a
 work-alike 

Well, it just isn't.  Brains are hard to reverse engineer, and that's basically what 
you're talking about.

 
 Failing that, it is still possible to set up a system akin to Creatures
 but with a much more powerful engine and wait untill a good'nuff
 algorithm evolves on its own... 

It took evolution billions of years with an enormous search space.  Obviously we can 
speed the process.  But in the end, you'd end up with an equally inscrutable mass of 
neural tissue.  You'd be better off getting yourself a real kid :)

 
 rant mode engaged
 I HATE IVORYTOWERISM!!!
 IF A BOOK DOESN'T TELL IT LIKE IT IS, IT SHOULD NEVER BE PUBLISHED, EVEN
 TO LITTLE CHILDREN!! (Especially not to little children.)

My comment was in the context of you saying that the brain is fantastically simple 
and then citing Calvin as a source for your conclusion.  I'm saying that books by pop 
authors are insufficient to draw conclusions from, not that they are useless.  His 
ideas are great, I love his work.  

 
 The brain does have an innate structure in the form of the topology I
 mentioned earlier. This topology naturally leads to the development of
 functional systems. HOWEVER, there is no law in the *cortex* which
 governs what behaviors it will produce (likes, dislikes etc...) these
 must be inputed either from the environment or from the subcortical
 structures.

I disagree with this, but I see where you are coming from.  We don't know enough about 
the cortex to say things like this.  The reason that subcortical structures seem more 
concrete to us, is that they are simpler in design and therefore easier to understand 
than cortical structures.  


 Yes, and I don't think those variations in layers or even connectivity
 are at all significant. Of course you want to know which layer is for
 input and which layer is for feedback, but you don't really worry
 yourself about the measurements, which are probably a byproduct of having
 more neurons in those regions that are heavily connected and not, in
 themselves, interesting... The extra layers in the occipital lobe are
 probably nothing more than the equivalent of a math coprocessor in a
 computer...

The addition or deletion of layers is going to drastically change the nature of 
computations a given bit of cortex performs.  

 
  I've spent 8 years studying hippocampal anatomy.  It is fascinating and 
  highly structured in a way the cortex isn't (or its simplicity allows 
  us to perceive the structure).  Vast volumes of data about its anatomy 
  are available and I have read most of it.
 
 GIMME GIMME GIMME!!! =P

I said I read it, I didn't say I could remember all of it :)

 
   I (and the rest of the hippocampal community) am at a loss to tell you 
  how it functions. 
 
 Do we know what it does? (how its outputs relate to its inputs)

Nope.  We think it might have to do with spatial navigation in rodents (rats tend to 
think in terms of 2-D space) and more complex types of memory in higher-order 
critters.  Anatomy and neurophysiology seem to suggest it should relate memory to 
motor actions and behavioral states, but lesion it and animals seem relatively 
unimpaired in that respect (lesions are a troublesome way to reverse engineer the 
brain).  *throws up 

RE: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Ben Goertzel

  I believe that the precision with which digital computers can do things
  will allow intelligence to be implemented more simply on them than in
  the brain.  This precision allows entirely different structures and
  dynamics to be utilized, in digital AGI systems as opposed to brains.
  For example, it allows correct probabilistic inference calculations
  (which humans, at least on the conscious level, are miserable at
  making); it allows compact expression of complex procedures as
  higher-order functions (a representation that is really profoundly
  unbrainlike); etc.


 I'd be curious to hear more about what you mean by this last
 statement.  You are referring to the nature of nesting complex
 function calls within one another?

 Brad

No, "higher-order functions" is a technical term from the theory of
functional programming.

It refers to the use of functions that have functions as arguments.

For instance, the derivative operator in calculus is a higher-order
function: it maps functions into functions.

So, the type of the real function x^2 is R -> R,
but the type of the derivative operator is [R -> R] -> [R -> R],
so the derivative is a second-order function...

Programming languages like Haskell (www.haskell.org) use higher-order
functions to achieve remarkably compact programs doing very complex things.
These programs are not terribly intuitive to most humans, mainly because our
limited stack size runs into trouble when dealing with functions deeper
than maybe third-order...
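
The same idea can be sketched in any language with first-class functions;
here is a minimal Python illustration (not Haskell, and not Novamente
code) of an operator whose type is [R -> R] -> [R -> R]:

# A higher-order function: it takes a function and returns a function.
# Here, a numerical approximation of the derivative operator.
def derivative(f, h=1e-6):
    def df(x):
        return (f(x + h) - f(x - h)) / (2 * h)   # central difference
    return df

square = lambda x: x ** 2        # an ordinary function, R -> R
d_square = derivative(square)    # also R -> R, produced by the operator
print(d_square(3.0))             # approximately 6.0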

Combinatory Logic, invented by Haskell Curry in the 50's, is a foundation
for mathematics based on higher-order functions, see e.g.

http://www.cwi.nl/~tromp/cl/cl.html

The Novamente design involves using higher-order functions to represent
complex procedures and patterns.  There are a lot of technical advantages to
this.  For one thing, it allows one to express extremely complex
mathematical patterns without using any variables.  Having complex
patterns expressed with no variables is good for Novamente's reasoning
algorithms; variables as used in ordinary non-combinator math would
complicate things TERRIBLY (as we discovered in Webmind).
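
To illustrate the "no variables" point in a generic way (this is ordinary
function composition, not Novamente's actual representation), the
procedure below is assembled purely by combining existing functions,
without ever naming the argument it will eventually be applied to:

# Sketch of variable-free (point-free) procedure building.  compose is
# essentially the B combinator of combinatory logic: B f g x = f (g x).
from functools import reduce

def compose(f, g):
    return lambda x: f(g(x))

inc = lambda x: x + 1
dbl = lambda x: 2 * x

# "double, then increment, then double again", defined without any
# variable standing for the data it operates on.
pipeline = reduce(compose, [dbl, inc, dbl])
print(pipeline(5))   # dbl(inc(dbl(5))) = 22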

Anyway, this is a very deep and technical topic; I introduced it as an
example of the kind of direction you can get led in when you think NOT about
the human brain but rather about the FUNCTIONS carried out by the brain and
how to most effectively carry them out in a digital computer context.

Higher-order function representations are not robust in the sense that
neural representations probably are: they aren't redundant at all, one error
will totally change the meaning.  They're not brainlike in any sense.  But
maybe (if my hypothesis is right) they provide a great foundation for
complex procedure learning and pattern recognition in a digital computer
context.  They seem to integrate very nicely with the other parts of
Novamente, anyhow.

-- Ben G


---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Brad Wyble
 
 Not exactly. It isn't that I think we should give up on AGI, but rather that
 we should be consciously planning for it to take several decades to get
 there. We should still tackle the problems in front of us, instead of giving
 up on real AI work altogether. But we need to get past the idea that every
 AI project should start from scratch and end up delivering a
 human-equivalent AGI, because that isn't going to happen. We just aren't
 that close yet.
 
 The way the software industry has solved big challenges in the past is to
 break them up into sub-problems, figure out which sub-problems can be solved
 right now, solve them as thoroughly as possible, and offer the resulting
 solutions as black boxes that can then become inputs into the next round of
 problem solving. That's what happened with operating systems, and
 development environments, and database systems. If we want to see real
 progress in AI, the same thing needs to happen to problems like NLP,
 computer vision, memory, attention, etc.
 

In as much as I'm a neurophile, I disagree that this is the best approach.  AI 
research has been having a hard time making progress by working on little black boxes 
and then hooking them together.  I think without the context of the whole entity (the 
top level AGI), it's harder to think about and implement solutions to the black box 
problems.  

Evolution certainly didn't work with black boxes.  It made functionally complete 
organisms at each step of the way, and I think AI design can work in the same manner.  
The progress of bottom-up, whole-organism robotics, a la Rod Brooks, is an impressive 
example of what can happen when you attack the whole organism simultaneously.  The 
top-level thinking is grounded in the structure of the representations used by the 
lower-level stuff that actually interacts with the world.  

Now this agrees with most of what you are saying, namely that we can't implement a 
cloud in the sky AGI that thinks in a vacuum.  But it disagrees with you in saying 
that we can't afford to work on these sub-problems without the context of the entire 
organism.

-Brad



---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



RE: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Ben Goertzel

 Ben Goertzel wrote:
  I like to distinguish two kinds of specialized mechanisms:
 
  1) those that are autonomous
 
  2) those that build specialized functionality on a foundation of
  general-intelligence-oriented structures and dynamics
 
  The AI field, so far, has focused mainly on Type 1.  But I
 think Type 2 is
  more important.

 Hmm. Well, using your terminology, I would say that:

 1) Type 2 mechanisms are only possible once you have the proper
 set of type
 1 mechanisms (i.e. the ones that implement thought in the first place).

Well my Type 1 and Type 2 are both specialized-intelligence mechanisms.

I also posit general-intelligence mechanisms, which are separate from Type 1
and Type 2 specialized intelligence mechanisms.

In the Novamente design, we have three generalized intelligence mechanisms:

* higher-order probabilistic inference
* evolutionary learning
* reinforcement learning

each with its own strengths and weaknesses.

We also have some complementary specialized cognitive mechanisms, like
first-order inference, neural-net-like association-finding, cluster
formation, etc.

Specialized intelligence components may be built on top of these.  For
instance, language processing uses aspects of all these (e.g. parsing is
largely unification, an aspect of higher-order inference).

Or, for something like edge detection, we would use a type 1 specialized
mechanism, and general intelligence wouldn't enter into it at all.
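
For concreteness, a Type 1 mechanism in this sense can be as plain as the
edge-detection sketch below (a generic Sobel-style filter in Python, not
anything taken from Novamente): it is pure signal processing, and no
general-intelligence machinery is involved at any point.

# A self-contained Type 1 specialized mechanism: Sobel-style edge detection.
def sobel_magnitude(image):
    """image: 2-D list of grayscale values; returns gradient magnitudes."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(kx[a][b] * image[i + a - 1][j + b - 1]
                     for a in range(3) for b in range(3))
            gy = sum(ky[a][b] * image[i + a - 1][j + b - 1]
                     for a in range(3) for b in range(3))
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out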

 But we need to get past the idea that every
 AI project should start from scratch and end up delivering a
 human-equivalent AGI, because that isn't going to happen. We just aren't
 that close yet.

I don't think all of us are trying to start from scratch.  I'm certainly
not, I'm using a lot of ideas developed by others over the past few decades.


 The way the software industry has solved big challenges in the past is to
 break them up into sub-problems, figure out which sub-problems
 can be solved
 right now, solve them as thoroughly as possible, and offer the resulting
 solutions as black boxes that can then become inputs into the
 next round of
 problem solving. That's what happened with operating systems, and
 development environments, and database systems. If we want to see real
 progress in AI, the same thing needs to happen to problems like NLP,
 computer vision, memory, attention, etc.

I completely disagree.  Building a complex self-organizing system is not
like building an ordinary engineered software system.  You can't design the
parts in isolation.  You have to design each part with explicit
consciousness of the whole.  Which means it has to be a unified project, not
a collection of disparate subprojects aimed at producing black boxes to
later be hooked together.  This is a profound difference between minds on
the one hand, and OS's, DB's and IDE's on the other.

And I still say, this is pretty much exactly the approach that conventional
academic AI is taking.  There is a conventional breakdown of the AI problem
into subproblems (of which you've listed several), and people tend to work
on each one separately.  I don't understand how what you suggest is
different from what nearly everyone in the field is doing.

-- Ben G

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Alan Grimes
 Higher-order function representations are not robust in the sense that
 neural representations probably are: they aren't redundant at all, one
 error will totally change the meaning.  They're not brainlike in any 
 sense.  But maybe (if my hypothesis is right) they provide a great 
 foundation for complex procedure learning and pattern recognition in a 
 digital computer context.  They seem to integrate very nicely with the 
 other parts of Novamente, anyhow.

 -- Ben G

/me dispenses a "You are on the right track, dude." medal.


-- 
I WANT A DEC ALPHA!!! =)
21364: THE UNDISPUTED GOD OF ALL CPUS.
http://users.rcn.com/alangrimes/
[if rcn.com doesn't work, try erols.com ]

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Brad Wyble
 
  The nature of neuroscience research doesn't really differentiate 
 between the two at present.  In order to understand WHAT a brain part
 does, we have to understand HOW it, and all structures connected to it
 function.   We need to understand the inputs and the outputs, and that's
 all HOW.
  
 
 I wouldn't say even that much... The exact format of the I/O is not
 necessary either, only the general "information X, Y, and Z is carried
 to here from here." 

We don't even know what the information is, honestly.  Cells fire spikes.  Sometimes 
there are clear behavioral correlates which make it easy to figure out (place cells); 
usually not.  The spike firing code depends on the function of the underlying 
structures.   We have to know how they represent information to know what information 
is being transmitted.  

Understand, by the way, that there are plenty of computational and mathematical 
specialists working on this, applying plenty of information theoretic approaches.  
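
To give a flavor of what one such approach looks like in practice, here is
a small Python sketch that estimates the mutual information between a
discrete stimulus and a binned spike count; the trials below are
synthetic, invented purely for the example.

# I(S; R) = sum over (s, r) of p(s, r) * log2( p(s, r) / (p(s) * p(r)) )
from collections import Counter
from math import log2

trials = [("A", 1), ("A", 2), ("A", 1), ("A", 2),    # (stimulus, spike count)
          ("B", 4), ("B", 5), ("B", 4), ("B", 5)]

n = len(trials)
p_joint = {k: v / n for k, v in Counter(trials).items()}
p_s = {k: v / n for k, v in Counter(s for s, _ in trials).items()}
p_r = {k: v / n for k, v in Counter(r for _, r in trials).items()}

mi = sum(p * log2(p / (p_s[s] * p_r[r])) for (s, r), p in p_joint.items())
print("I(S;R) =", mi, "bits")   # 1.0 bit: the response identifies the stimulus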

 
 I've seen a very interesting report on the reverse engineering of the
 hearing system though I am still months away from finishing my first
 reading of Principles of neuroscience. 

The primary modalities are the easiest systems to decode, because you can control 
precisely what the inputs are.  Those are the first systems to be decoded.

 Yes, that is because they don't constitute a computer.
 I suppose you need a really deep understanding of what computation is to
 see how the cortex is a computer (and hence has all the same properties
 of nonpredictability and such...) 

Well it computes.  So... it's a computer, sure.  Feel free to tell me more.

 
 Does it really? ;)
 I would suggest that the individual cortical columns represent a fairly
 consistent set of adaptive logic gates (of considerable complexity). I
 would further suggest that, as the ferret example showed, the computation
 the cortical region performs depends mostly on where in the logic
 network the inputs are sent and the outputs taken. In this way you can
 take just about any cortical region and get it to do just about anything
 any other region does (except for the extra layers of the occipital
 lobe) just by hooking it up differently...

I don't really have any strong data for or against that hypothesis.  We're not sure 
how brittle columns are, functionally.  Simple neural net models tell us though that 
it's very easy to drastically alter the functional character of a network by changing 
one parameter.  I'll read the ferret example, but I'm guessing that all they found was 
evidence of striation, which doesn't mean the system is working correctly.  However, 
given the resilience of the brain to changes performed at a young age, it is likely 
there was some visual perception.  

 
 Where is the evidence for cellular differentiation beyond the 20 or so
 classes of neurons?

I'm not talking just about neuron types, but also about connection patterns of neurons 
between and within areas.   Subregions CA3 and CA1 of the hippocampus are identical 
from a cellular composition perspective, but their connectivity patterns are so 
different that no one who studies the system would expect them to do the same thing.  
Neurophysiological evidence demonstrates that they do in fact differ in their 
functional characteristics.

 
 Absent this evidence, how can you say that a certain structure of cells
 X, Y, and Z which are arranged in layers 1-6 in cortical region A does
 something significantly different from those in region B?
 

For starters, an autoassociative network performs differently from a heteroassociative 
one.  Or add noradrenergic modulation (or one of 10+ other neuromodulators), or delete 
a subclass of GABA cells, or triple the percentage of stellate cells.  It is easy to 
make a neural network behave differently.  This is easily demonstrated with models.
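
A toy demonstration of the first point (a bare Hebbian outer-product
sketch, not a model of CA3 or CA1): the identical learning rule, wired
autoassociatively versus heteroassociatively, yields networks that do
different things.

# Same Hebbian rule, two wirings: pattern onto itself (autoassociative,
# cleans up a corrupted cue) versus pattern onto a different pattern
# (heteroassociative, maps the cue to its associate).
import numpy as np

rng = np.random.default_rng(0)
a = rng.choice([-1, 1], size=32)          # stored pattern
b = rng.choice([-1, 1], size=32)          # associate of a

W_auto = np.outer(a, a)                   # autoassociative weights
W_hetero = np.outer(b, a)                 # heteroassociative weights

cue = a.copy()
cue[:8] *= -1                             # corrupt a quarter of the cue

recall_auto = np.sign(W_auto @ cue)       # recovers a
recall_hetero = np.sign(W_hetero @ cue)   # produces b, not a

print("auto   matches a:", int(np.sum(recall_auto == a)), "/ 32")
print("hetero matches b:", int(np.sum(recall_hetero == b)), "/ 32")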

   

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]