Breakthroughs happen every day. The problem is we cannot process enough
information to see them all. But I guess we'll know the AGI breakthrough
either when we see it, or when we create it. One of the two.
I think Watson is a component. I think SIRI is a component. I think SIRI +
Watson is a good start. In 5 years SIRI might be equivalent to HAL. We'll
see. SIRI has a modular system that they can just keep connecting to more and
more service providers. Might be a good business to make specialized AI
components that SIRI can connect to. Like a special type of reasoner. Just a
thought.
Mental simulation may address the sustained exploration of ideas issue. I just
haven't heard of many simulators that take in external information and are
allowed to run indefinitely. Perhaps that's a good research direction.
Multiple path problems just need good generic evaluation functions and the
capacity to do very large scale search.
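The "generic evaluation function plus large-scale search" idea can be sketched in a few lines. This is a minimal, hypothetical illustration, not any particular AGI system: a best-first search engine where the evaluation function is a pluggable parameter, tried here on an invented toy domain (reach a target number using +1 and *2 moves).

```python
import heapq

def best_first_search(start, is_goal, successors, evaluate):
    """Generic best-first search: the evaluation function is pluggable,
    so the same search engine can be reused across problem domains."""
    frontier = [(evaluate(start), start)]
    seen = {start}
    while frontier:
        _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (evaluate(nxt), nxt))
    return None

# Toy domain (invented for the example): reach 20 from 1 via +1 or *2.
goal = 20
result = best_first_search(
    start=1,
    is_goal=lambda s: s == goal,
    successors=lambda s: [s + 1, s * 2],
    evaluate=lambda s: abs(goal - s),  # generic distance-to-goal heuristic
)
```

Swapping in a different `evaluate` changes the search behavior without touching the engine, which is the sense in which the evaluation function is "generic".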
Coordination is adding inferences to a model. Integration is combining
individuals.
You don't have to represent ALL the attributes of a concept. Inheritance is a
cheap and economical way to represent the attributes of immediate concern to a
problem.
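The inheritance point can be made concrete with a toy concept hierarchy. The concepts and attributes below are invented for illustration: each concept stores only the attributes of immediate concern and inherits the rest by walking up to more general concepts.

```python
class Concept:
    """A concept stores only its locally relevant attributes;
    everything else is inherited from a more general parent concept."""
    def __init__(self, name, parent=None, **attributes):
        self.name = name
        self.parent = parent
        self.attributes = attributes

    def get(self, attribute):
        # Walk up the hierarchy instead of storing every attribute locally.
        if attribute in self.attributes:
            return self.attributes[attribute]
        if self.parent is not None:
            return self.parent.get(attribute)
        return None

animal = Concept("animal", can_move=True)
bird = Concept("bird", parent=animal, can_fly=True)
penguin = Concept("penguin", parent=bird, can_fly=False)  # local override

penguin.get("can_move")  # inherited from "animal" without restating it
```

The economy is that "penguin" only records where it differs from "bird"; everything shared comes for free from the hierarchy.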
Good response though.
Cheers.
~PM.
A breakthrough in AGI will be immediately obvious because it will not need to
be "tweaked" for 60 years - or even 10 years - to get it to work. Once it was
figured out, someone would be able to implement simple models that would
confirm the viability of the methods within a few weeks or a few months. We
have computers that are powerful enough to run intense simulations or do
extensive searches; we are just lacking some fundamental programming or
hardware design that would make visible improvements in AGI viable.
I can't think of specific examples of where AGI algorithms fail, because even
good programs fail, or would fail, whenever an effort is made to take them
beyond a fairly low level of achievement. (I can't tell if Watson is a
viable model for general intelligence or not because I don't know how it works,
but right now it seems like it is unable to learn to work with combinatorial
uncertainty and multiple path integration issues.)
As long as the problem is kept simple, feasible AGI programs can learn by
using a method of validation through some kind of acknowledgement. What I am
saying is that I could write a simple effective AGI program that could learn a
few hundred "ideas" (or idea-like knowledge objects) and use them in ways that
correspond to the ways that the program had seen them used. However, this
program could not go on learning new ideas and using them in ways that human
beings would find familiar, because it would be so severely limited. So we
have programs that can learn to understand a great deal of speech or to
translate from one language to another or to detect some handwritten characters
but it is done without much flexibility or the capability to explore novel
paths of insight in any but the simplest or most structured ways. No one can
deny that these programs are viable examples of artificial intelligence but
they always seem to lack the spark that only novel thinking can provide.
So the examples of problems that haven't been solved are best specified as
kinds of problems. One thing that AI programs haven't been able to do is to
effectively use sustained exploration of ideas or concepts in order to solve
novel kinds of problems (that is, problems that have not been specified using
a well-formed, narrow method like a highly specified mathematical formula and
that are not simple enough to be solved by a contemporary neural network).
A multiple path problem is one in which different paths of reasoning can be
used to arrive at a conclusion. Multiple path problems should be easy for
actual AGI programs because they usually have common nodes where divergent
paths toward a solution can be taken. This would give an AGI program the
option of trying another path if it got stuck, and it should give the AGI
program ample opportunities to learn about different strategies. However,
I don't know of any AGI program that is able to solve problems like this
(except for actual path taking when the paths are reasonably easy to traverse)
and I think it is specifically because the multiple path problems also present
combinatorial complexity to genuine learning algorithms. Some board games,
like chess, are multiple path games, and here AI is able to do well by using an
artificial position evaluation method. So we have a good chess model that
works specifically by choosing the best path that deep searches, guided by a
position evaluation algorithm, can provide; even today most chess programs do
not actually go through much learning beyond the most mundane record keeping.
The problem is that it is almost impossible (or
it is currently impossible) to find the different ways to efficiently represent
the characterizations of an event so that it could be grouped with other events
that share some similarity with it. Because we are able to use reasoning that
goes beyond superficial similarities we are able to find hundreds or thousands
of possible associations from one concept to another. This richness in
potential comes at a cost of overwhelming complexity.
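The chess case above - deep search steered by an artificial position evaluation, falling back on the heuristic when the search runs out of depth - can be sketched in miniature. The game here is a trivial stand-in for chess (one pile, take 1-3 stones, taking the last stone wins), and the heuristic is hand-rolled for the example:

```python
def minimax(pile, depth, maximizing, evaluate):
    """Depth-limited minimax: when the search runs out of depth it falls
    back on an artificial evaluation, much as chess engines do."""
    if pile == 0:
        # The player who just moved took the last stone and wins.
        return -1 if maximizing else 1
    if depth == 0:
        return evaluate(pile, maximizing)
    moves = [m for m in (1, 2, 3) if m <= pile]
    scores = [minimax(pile - m, depth - 1, not maximizing, evaluate)
              for m in moves]
    return max(scores) if maximizing else min(scores)

# Hand-rolled heuristic: in this take-1-to-3 game, piles that are
# multiples of 4 are losing for the player to move.
def evaluate(pile, maximizing):
    good_for_mover = pile % 4 != 0
    return (1 if good_for_mover else -1) * (1 if maximizing else -1)

# Pick our move from a pile of 10 by searching three plies ahead.
best_move = max((1, 2, 3),
                key=lambda m: minimax(10 - m, depth=3,
                                      maximizing=False, evaluate=evaluate))
```

The point of the sketch is the division of labor: the search is generic, and all the "chess knowledge" lives in the evaluation function - which is exactly the part that resists being generalized to open-ended problems.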
You talked about coordination using the attributes of a concept, but when you
offer some examples they are predictably artificial and insipid. (That isn't
an insult, typical examples are insipid because they are so concise.) Part of
this might be due to the time it would take to represent all the attributes of
a concept but if you were to start to list all the attributes of a concept that
you could think of, the potential to find how that concept could be related to
other concepts would make the complications and complexity of that method
plain. And that doesn't even take the added complications of exploring
tentative hypotheses into account.
Jim Bromer