> 1. What is the single biggest technical gap between current AI and AGI? 
 
I think hardware is a limitation in the sense that it biases our thinking
toward simplistic models of intelligence. However, even if we had more
computational power at our disposal, we would not yet know what to do with it,
and so the biggest gap is conceptual rather than technical.
 
In particular, I have become more and more skeptical that efforts to produce
concise theories of things like knowledge representation are likely to
succeed. Frames, is-a relations, logical inference on atomic tokens, and so
on are attempts to make intelligent behavior comprehensible in concisely
describable ways, but they seem to be only crude approximations to the
"reality" of intelligent behavior, which seems less and less likely to have
formulations that sit comfortably within our human ability to reason about
effectively. As one example, consider the study in cognitive science of the
theory of categories -- from the classical "necessary and sufficient
conditions" view to the more modern competing views of "prototypes" versus
"exemplars". All of these are nice, simple descriptions, but, as so often
happens, the effort to boil the phenomena down to simple ideas we can work
with in our tiny brains seems to boil off most of the important stuff.
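
For concreteness, here is a toy Python sketch of that prototype-versus-
exemplar contrast. Everything in it (the two categories, the feature points,
the distance measure) is invented for illustration; real cognitive models are
of course far richer, which is rather the point:

    # Toy contrast between "prototype" and "exemplar" category models.
    # All categories and data points here are made up for illustration.
    import math

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Stored observations for two hypothetical categories.
    examples = {
        "bird":   [(1.0, 0.9), (0.8, 1.1), (1.2, 1.0)],
        "mammal": [(3.0, 2.9), (3.2, 3.1), (2.8, 3.0)],
    }

    def classify_prototype(item):
        # Prototype view: a category is summarized by one mean point.
        def proto(points):
            return tuple(sum(c) / len(c) for c in zip(*points))
        return min(examples, key=lambda cat: dist(item, proto(examples[cat])))

    def classify_exemplar(item):
        # Exemplar view: compare against every stored instance; nearest wins.
        return min(examples,
                   key=lambda cat: min(dist(item, e) for e in examples[cat]))

    print(classify_prototype((1.1, 1.0)))  # -> bird
    print(classify_exemplar((1.1, 1.0)))   # -> bird

Both views agree on easy cases like this one; the interesting disagreements,
and most of what actually matters, live in everything these twenty lines
leave out.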
 
The challenge is for us to come up with ways to think about, or at least work
with (and somehow reproduce or invent!), mechanisms that appear not to be
reducible to convenient theories. I expect that our ways of thinking about
these things will evolve as the systems we build operate on more and more
data. As Novamente's atom table grows from thousands to millions and
eventually billions of rows; as cortex simulations become more detailed and
more open to study; as we start to grapple with semantic nets containing many
millions of nodes -- our understanding of the dynamics of such systems will
increase. Eventually we will become comfortable with, and more able to build,
systems whose desired behaviors cannot even be specified in a simple or
rigorous way.
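
To make "atom table" less abstract for readers who haven't seen one: the
rough idea is a single flat store of nodes and typed links. The sketch below
is a generic toy of my own, not Novamente's actual design or API:

    # A minimal, generic sketch of an "atom table"-style store: concept
    # nodes and typed links kept together in one flat, id-indexed table.
    # This is an invented toy, not Novamente's actual data structures.
    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class Atom:
        atom_id: int
        atom_type: str                  # e.g. "ConceptNode", "InheritanceLink"
        name: Optional[str] = None      # nodes carry names
        outgoing: Tuple[int, ...] = ()  # links point at other atoms' ids

    class AtomTable:
        def __init__(self):
            self.atoms = {}             # atom_id -> Atom
            self.next_id = 0

        def add(self, atom_type, name=None, outgoing=()):
            atom = Atom(self.next_id, atom_type, name, tuple(outgoing))
            self.atoms[atom.atom_id] = atom
            self.next_id += 1
            return atom.atom_id

    table = AtomTable()
    cat = table.add("ConceptNode", name="cat")
    animal = table.add("ConceptNode", name="animal")
    table.add("InheritanceLink", outgoing=(cat, animal))  # "a cat is an animal"

At three rows this is trivial; the question raised above is what happens to
our intuitions when the same table holds billions of rows and the interesting
behavior lives in their collective dynamics.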
 
Or perhaps theoretical breakthroughs will occur, making it possible to
describe intelligence and its associated phenomena in simple scientific
language.
 
Because neither of these things can be done at present, we can barely even
talk to each other about things like goals, semantics, grounding,
intelligence, and so forth: the process of taking these unknown and perhaps
inherently complex things and compressing them into simple language symbols
throws out too much information for us to communicate effectively even what
little we do understand.
 
Either way, it will take decades if we're lucky. Moving from mouse-level
hardware to monkey-level hardware over the next couple of decades will help,
just as our views on machine intelligence have already expanded beyond those
of our forebears, who looked at the first digital computers and wondered how
they might be made to think.
 
 
