RE: [agi] Intelligence by definition

2003-01-03 Thread Ben Goertzel




Hi,

You make a lot of different points... I'll just grab one for the moment.

  ***
  I don't believe in combining different methods 
  because cognition deals with the unknown, - we can't a priori split it into 
  different areas, except to the extent that they're sensor/hardware specific, 
  or levels, except that syntactic complexity of inputs should be sequentially 
  increased. 
  ***
  
  I like to distinguish between *functional specialization* and *integrated cognition*.
  
  Novamente (my own AI system) has a mix of cognitive algorithms, which 
  work together to provide overall cognitive functionality. The exact 
  mixture of algorithms is determined by a bunch of parameters. This is 
  one example of "integrated cognition".
  
  Functional specialization has to do with there being modules of an 
  intelligent system devoted to particular areas like language processing, 
  vision processing, social interaction, etc. In the Novamente design, 
  each functionally specialized lobe has its own parameter values which 
  determine the specific mix of cognitive algorithms operating within it. 
  (We haven't gotten to experimenting with this yet; right now we're just 
  experimenting with mixing cognitive algorithms.)
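
  To make the parameterization concrete, here is a minimal sketch, assuming a 
  purely hypothetical scheme -- none of these names are Novamente's actual code 
  or API -- of how per-lobe parameter values could weight a shared pool of 
  cognitive algorithms:

```python
# Minimal sketch, assuming a hypothetical scheme: one shared pool of cognitive
# algorithms, and a separate parameter mix for each functionally specialized
# lobe. Names are illustrative only, not Novamente's actual code.

def probabilistic_inference(task):
    return ("inference", task)

def evolutionary_learning(task):
    return ("evolution", task)

def association_formation(task):
    return ("association", task)

ALGORITHM_POOL = {
    "inference": probabilistic_inference,
    "evolution": evolutionary_learning,
    "association": association_formation,
}

# Each lobe runs the same algorithms, but with its own parameter values.
LOBE_PARAMETERS = {
    "language": {"inference": 0.6, "evolution": 0.1, "association": 0.3},
    "vision":   {"inference": 0.2, "evolution": 0.3, "association": 0.5},
}

def run_lobe(lobe_name, task):
    """Apply every pooled algorithm to the task, weighted by the lobe's mix."""
    weights = LOBE_PARAMETERS[lobe_name]
    return [(weights[name], fn(task)) for name, fn in ALGORITHM_POOL.items()]

print(run_lobe("language", "parse this sentence"))
```

  In this toy picture only the weights differ from lobe to lobe; the point is 
  just that functional specialization can live in parameter settings rather 
  than in separate algorithms.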
  
  Generally, a mixture of cognitive algorithms is just as capable of dealing 
  with the unknown as a single cognitive algorithm. Sometimes more so.
  
  On the other hand, functional specialization biases one's system to deal 
  with some parts of the space of the unknown better than others.
  
  
  This is a plus and a minus, obviously. Human cognition deals with the truly 
  unknown very slowly and awkwardly. The human brain is specialized not only 
  based on its sensors and actuators, but also for linguistic processing, 
  social interaction, temporal event processing, etc. etc. etc. This means 
  that it would not work as well taken outside of its ordinary social and 
  physical situations. But it means that its limited resources are generally 
  well deployed within its usual environments.
  
  -- 
  Ben Goertzel.


Re: [agi] Intelligence by definition

2003-01-03 Thread Boris Kazachenko



Thanks for your comments!

I like 
to distinguish between *functional specialization* and *integrated 
cognition*

Novamente (my own AI system) has a mix of 
cognitive algorithms, which work together to provide overall cognitive 
functionality. The exact mixture of algorithms is determined by a bunch of 
parameters. This is one example of "integrated 
cognition".
Functional specialization has to do with there being modules of an 
intelligent system devoted to particular areas like language processing, vision 
processing, social interaction, etc. 

It seems to me that the conceptual difference between vision and language is in 
the level of generalization, aside from different sensor/actuator orientation.

Social interaction? Once you start coding things that are learnable, where do 
you stop before ending up with just another expert system?
Isn't this all about scalable learning, which should develop environmentally 
specific functional specialization on its own?

In the 
Novamente design, each functionally specialized lobe has its own parameter 
values which determine the specific mix of cognitive algorithms operating within 
it. (We haven't gotten to experimenting with this yet, now we're just 
experimenting with mixing cognitive algorithms.)

Generally, a mixture of cognitive algorithms is 
just as capable of dealing with the unknown as a single cognitive 
algorithm. Sometimes more so


What single algorithm? How do you evaluate 'dealing'? How do you derive/select 
your algorithms for unknown inputs without first quantitatively defining your 
objectives? Your definition of intelligence doesn't seem to be functional to 
me; goals can't be defined solely by their complexity.
Without deductive derivation we are stuck with trial and error, which can take 
millennia.

On the other hand, functional specialization 
biases one's system to deal with some parts of the space of the unknown better 
than others. 
This is a plus and a minus, obviously. 
Human cognition deals with the truly unknown very slowly and 
awkwardly. 

I mean 'unknown' not to the cognitive system but to its designer. Also, the 
reason human learning is so slow is 'hardware'-specific: it takes a lot longer 
to build new connections than to access them. That's not the case for computer 
hardware.

The human brain is specialized not only based on 
its sensors and actuators, but also for linguistic processing, social 
interaction, temporal event processing, etc. etc. etc. This means that it 
would not work as well taken outside of its ordinary social and physical 
situations. But it means that its limited resources are generally well 
deployed within its usual environments.

That's true, but the human brain is an accident of incremental and obviously 
unfinished evolution, not some grand design. Besides, I think to some extent 
these different areas are specialized not so much by genetic design but by the 
impact of the input types they receive. In any case, you must admit, this 
stone-age 'design' doesn't perform very well now, and it will get worse as the 
changes accelerate.

Regards!
Boris.




RE: [agi] Intelligence by definition

2003-01-03 Thread Ben Goertzel




hi,

***
It seems to me that the conceptual difference between vision and language is in 
the level of generalization, aside from different sensor/actuator orientation. 
Social interaction? Once you start coding things that are learnable, where do 
you stop before ending up with just another expert system? Isn't this all about 
scalable learning, which should develop environmentally specific functional 
specialization on its own?
***

Well, in Novamente we are not coding *specific knowledge* that is learnable... 
but we are coding implicit knowledge as to what sorts of learning processes are 
most useful in which specialized subdomains...


***
What single algorithm? How do you evaluate 'dealing'? How do you derive/select 
your algorithms for unknown inputs without first quantitatively defining your 
objectives? Your definition of intelligence doesn't seem to be functional to 
me; goals can't be defined solely by their complexity.

Without deductive derivation we are stuck with trial and error, which can take 
millennia.
***

The Novamente design 
is mathematically formulated, but not mathematically derived. That is, 
individual formulas used in the system are mathematically derived, but the 
system as a whole has been designed by intuition (based on integrating a lot of 
different ideas from a lot of different domains) rather than by formal 
derivation.

In my view, we are 
nowhere near possessing the right kind of math to derive a realistic AI design 
from definitions in a rigorous way. Juergen Schmidhuber's OOPS system is 
an attempt in this direction, but though I like Juergen's work, I think this 
design is too simplistic to be a functional 
AGI.

http://www.idsia.ch/~juergen/oops.html

Maybe further 
work in the OOPS direction will yield something like what you're 
suggesting...

***
Also, the reason human learning is so slow is 'hardware'-specific: it takes a 
lot longer to build new connections than to access them. That's not the case 
for computer hardware.
***

I don't think you're right about the reason human learning is so slow. It is 
not just hardware inefficiency; it is the fact that a lot of 
trial-and-error-based algorithms are used in the brain.

***
That's true, but the human brain is an accident of incremental and obviously 
unfinished evolution, not some grand design. Besides, I think to some extent 
these different areas are specialized not so much by genetic design but by the 
impact of the input types they receive. In any case, you must admit, this 
stone-age 'design' doesn't perform very well now, and it will get worse as the 
changes accelerate.
***

The human brain has many flaws and 
is not a perfect guide for AGI, but it has far more general intelligence than 
any existing computer program, and so it is certainly worth carefully studying 
when designing a would-be AGI system.

Novamente is intended to ultimately go beyond what the human brain can 
accomplish, but for version 1 we'll be content to achieve human-level general 
intelligence ;-)

-- Ben 
Goertzel



Re: [agi] Intelligence by definition

2003-01-03 Thread RSbriggs
 
Well, in Novamente we are not coding *specific knowledge* that is learnable... but we are coding implicit knowledge as to what sorts of learning processes are most useful in which specialized subdomains...


I'm reminded of an AI pioneer who once commented on this same situation - he closed his eyes, pretending that there wasn't a grad student in the room.

===bob briggs



Re: [agi] Intelligence by definition

2003-01-03 Thread Boris Kazachenko



Well, in Novamente we are not coding *specific knowledge* that is learnable... 
but we are coding implicit knowledge as to what sorts of learning processes are 
most useful in which specialized subdomains...
***
I don't know, from where I sit this distinction is artificial. Learning is 
generally defined as projected compression; the complexity of methods to 
achieve it can be sequentially increased as long as it produces positive 
additional compression minus the expense, until it matches the complexity of 
the inputs. In other words, optimal methods themselves should be learned.
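
As a minimal sketch of that rule (the compression sizes and costs below are 
made up just to show the stopping criterion; this is not a real learning 
system):

```python
# Hypothetical sketch of the rule described above: move to more complex methods
# only while the additional compression they buy exceeds their added expense.
# "compress_fn" and "cost" are stand-ins with made-up numbers.

def select_method(methods, data):
    """methods: list of (name, compress_fn, cost) in order of increasing complexity.
    compress_fn(data) returns the compressed size achieved on the data."""
    best = None
    best_size = len(data)          # baseline: no compression at all
    for name, compress_fn, cost in methods:
        size = compress_fn(data)
        gain = best_size - size    # additional compression over the best so far
        if gain - cost <= 0:       # stop once extra complexity no longer pays
            break
        best, best_size = name, size
    return best

# Toy usage: each method "compresses" a 100-unit input to a fixed size.
methods = [
    ("order-0", lambda d: 80, 5),   # 20 units gained at cost 5 -> keep going
    ("order-1", lambda d: 60, 10),  # 20 more gained at cost 10 -> keep going
    ("order-2", lambda d: 58, 15),  # only 2 more gained at cost 15 -> stop
]
print(select_method(methods, [0] * 100))  # -> "order-1"
```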

The Novamente design 
is mathematically formulated, but not mathematically derived. That is, 
individual formulas used in the system are mathematically derived, but the 
system as a whole has been designed by intuition (based on integrating a lot of 
different ideas from a lot of different domains) rather than by formal 
derivation.
In my view, we are 
nowhere near possessing the right kind of math to derive a realistic AI design 
from definitions in a rigorous way. 


To select formulas you must have an implicit criterion; why not try to make it 
explicit? I don't believe we need complex math for AI, complex methods can be 
universal, - generalization is a reduction. What we need is an autonomously 
scalable method.

Juergen Schmidhuber's 
OOPS system is an attempt in this direction, but though I like Juergen's work, I 
think this design is too simplistic to be a functional 
AGI.
http://www.idsia.ch/~juergen/oops.html

Thanks, I am looking at it. I noticed that he starts with a known probability 
distribution; to me that suggests that the problem is already solved 
;-)

I don't think you're right about 
the reason human learning is so slow. It is not just hardware 
inefficiency, 

Of course, that's only part of 
it.

it is the fact that a lot of 
trial-and-error-based algorithms are used in the 
brain.

I call it search.

The human brain has many flaws and 
is not a perfect guide for AGI, but it has far more general intelligence than 
any existing computer program, and so it is certainly worth carefully studying 
when designing a would-be AGI system.

How about using it? 
;-)

Boris.


RE: [agi] Intelligence by definition

2003-01-03 Thread Ben Goertzel




***
Well, in Novamente we are not coding *specific knowledge* that is learnable... 
but we are coding implicit knowledge as to what sorts of learning processes are 
most useful in which specialized subdomains...
---

I don't know, from where I sit this distinction is artificial. Learning is 
generally defined as projected compression; the complexity of methods to 
achieve it can be sequentially increased as long as it produces positive 
additional compression minus the expense, until it matches the complexity of 
the inputs. In other words, optimal methods themselves should be learned.
***

Yes, if you have a huge 
amount of space and time resources available, you can start your system with a 
blank slate -- nothing but a very simple learning algorithm, and let it learn 
how to learn, learn how to structure its memory, etc. etc. 
etc.

This is pretty much what 
OOPS does, and what is suggested in Marcus Hutter's related 
work.

It is not a practical 
approach, in my view. My belief is that, given realistic resource 
constraints, you can't take such a general approach and have to start off the 
system with specific learning methods, and even further than that, with a 
collection of functionally-specialized combinations of learning 
algorithms. 

I could be wrong of 
course but I have seen no evidence to the contrary, so 
far...


*** 
The Novamente design is mathematically formulated, but not 
mathematically derived. That is, individual formulas used in the system 
are mathematically derived, but the system as a whole has been designed by 
intuition (based on integrating a lot of different ideas from a lot of different 
domains) rather than by formal derivation.

In my view, we are nowhere near possessing 
the right kind of math to derive a realistic AI design from definitions in a 
rigorous way. 

  ---
  
  To select formulas you must have an implicit criterion; why not try to make 
  it explicit? I don't believe we need complex math for AI, complex methods can 
  be universal, - generalization is a reduction. What we need is an autonomously 
  scalable method.
  ***
  
  Well, if you know some 
  simple math that is adequate for deriving a practical AI design, please speak 
  up. Point me to the URL where you've posted the paper containing this 
  math! I'll be very curious to read it 
  ;-)
  
  
  Juergen 
  Schmidhuber's OOPS system is an attempt in this direction, but though I like 
  Juergen's work, I think this design is too simplistic to be a functional 
  AGI.
  http://www.idsia.ch/~juergen/oops.html
  ---
  Thanks, I am looking at it. I noticed that he starts with a known probability 
  distribution; to me that suggests that the problem is already solved 
  ;-)
  
  
  He starts with 
  a known pdf for theoretical purposes. He is proving that his system can 
  work effectively for ANY given probability distribution. He is not 
  assuming that his system is somehow fed the pdf in 
  advance.
  
  
  -- 
  Ben